Test Report: QEMU_macOS 19763

aa5eddb378ec81f2e43c808f5204b861e96187fd:2024-10-07:36541

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 42.38
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.97
27 TestAddons/Setup 10.01
28 TestCertOptions 10.02
29 TestCertExpiration 195.46
30 TestDockerFlags 10.28
31 TestForceSystemdFlag 10.09
32 TestForceSystemdEnv 10.27
38 TestErrorSpam/setup 9.97
47 TestFunctional/serial/StartWithProxy 9.89
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
61 TestFunctional/serial/MinikubeKubectlCmd 0.78
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.24
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.14
82 TestFunctional/parallel/CpCmd 0.3
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.35
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
107 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
108 TestFunctional/parallel/ServiceCmd/List 0.05
109 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
110 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.31
112 TestFunctional/parallel/ServiceCmd/Format 0.06
113 TestFunctional/parallel/ServiceCmd/URL 0.05
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 106.72
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 9.87
142 TestMultiControlPlane/serial/DeployApp 108.73
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 50.02
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.42
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 1.99
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.92
165 TestJSONOutput/start/Command 9.71
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.11
197 TestMountStart/serial/StartWithMountFirst 10.54
200 TestMultiNode/serial/FreshStart2Nodes 10.02
201 TestMultiNode/serial/DeployApp2Nodes 106.88
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 55.82
209 TestMultiNode/serial/RestartKeepsNodes 9
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 4.16
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.08
217 TestPreload 9.97
219 TestScheduledStopUnix 10.14
220 TestSkaffold 16.55
223 TestRunningBinaryUpgrade 614.44
225 TestKubernetesUpgrade 18.48
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.92
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.96
241 TestStoppedBinaryUpgrade/Upgrade 564.55
243 TestPause/serial/Start 9.88
253 TestNoKubernetes/serial/StartWithK8s 9.88
254 TestNoKubernetes/serial/StartWithStopK8s 5.85
255 TestNoKubernetes/serial/Start 5.85
259 TestNoKubernetes/serial/StartNoArgs 5.82
261 TestNetworkPlugins/group/auto/Start 9.82
262 TestNetworkPlugins/group/kindnet/Start 10.04
263 TestNetworkPlugins/group/calico/Start 9.79
264 TestNetworkPlugins/group/custom-flannel/Start 9.73
265 TestNetworkPlugins/group/false/Start 9.89
266 TestNetworkPlugins/group/enable-default-cni/Start 9.78
267 TestNetworkPlugins/group/flannel/Start 9.76
269 TestNetworkPlugins/group/bridge/Start 9.88
270 TestNetworkPlugins/group/kubenet/Start 9.91
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.88
274 TestStartStop/group/no-preload/serial/FirstStart 9.85
275 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
279 TestStartStop/group/no-preload/serial/DeployApp 0.1
280 TestStartStop/group/old-k8s-version/serial/SecondStart 5.29
281 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
284 TestStartStop/group/no-preload/serial/SecondStart 5.9
285 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
286 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
287 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
288 TestStartStop/group/old-k8s-version/serial/Pause 0.11
290 TestStartStop/group/embed-certs/serial/FirstStart 9.9
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
294 TestStartStop/group/no-preload/serial/Pause 0.11
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.88
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/embed-certs/serial/SecondStart 5.27
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
310 TestStartStop/group/embed-certs/serial/Pause 0.11
312 TestStartStop/group/newest-cni/serial/FirstStart 9.84
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.27
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.12

TestDownloadOnly/v1.20.0/json-events (42.38s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-915000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-915000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (42.374017667s)

-- stdout --
	{"specversion":"1.0","id":"8936385e-9876-4c33-8b1c-acf85ba9347a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-915000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e11d6743-c7a5-429f-9c43-81e01ec54db5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"440f63c8-8c88-4689-96a7-2e76b4a2eb73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig"}}
	{"specversion":"1.0","id":"2fb2416b-227b-4b18-be49-3ca0f494448e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"56b9fe33-489a-4eb4-9e23-09460fd24581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9839feb6-4b78-49f4-87d9-4b7623c22e99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube"}}
	{"specversion":"1.0","id":"8204cf02-3c4c-438b-ad9d-4ea6df6408b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"b0fba377-06a3-4ea9-8e71-8b9d756e14ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0db3a92-f945-493a-8966-f605b2d62f07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3072b365-ad08-47ea-a8b8-283d4c8ba19c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"85f5c805-fd9a-40f8-9250-ff46d1bc3cec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-915000\" primary control-plane node in \"download-only-915000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d9c7d2c-470f-4571-942a-0ddad49a4ef7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4af61917-1212-47f5-8cc4-0a924ac021c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60] Decompressors:map[bz2:0x1400000fca0 gz:0x1400000fca8 tar:0x1400000fc50 tar.bz2:0x1400000fc60 tar.gz:0x1400000fc70 tar.xz:0x1400000fc80 tar.zst:0x1400000fc90 tbz2:0x1400000fc60 tgz:0x1400000fc70 txz:0x1400000fc80 tzst:0x1400000fc90 xz:0x1400000fcb0 zip:0x1400000fcd0 zst:0x1400000fcb8] Getters:map[file:0x140005b2b30 http:0x140008c00a0 https:0x140008c00f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"5ec5cfa1-29d0-4870-bdd4-463222705e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1007 04:43:29.857173    6751 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:43:29.857351    6751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:43:29.857354    6751 out.go:358] Setting ErrFile to fd 2...
	I1007 04:43:29.857356    6751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:43:29.857490    6751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	W1007 04:43:29.857618    6751 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19763-6232/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19763-6232/.minikube/config/config.json: no such file or directory
	I1007 04:43:29.859038    6751 out.go:352] Setting JSON to true
	I1007 04:43:29.877048    6751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4380,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:43:29.877127    6751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:43:29.882920    6751 out.go:97] [download-only-915000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:43:29.883056    6751 notify.go:220] Checking for updates...
	W1007 04:43:29.883089    6751 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 04:43:29.885918    6751 out.go:169] MINIKUBE_LOCATION=19763
	I1007 04:43:29.889010    6751 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:43:29.893954    6751 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:43:29.896914    6751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:43:29.899961    6751 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	W1007 04:43:29.905922    6751 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 04:43:29.906164    6751 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:43:29.908903    6751 out.go:97] Using the qemu2 driver based on user configuration
	I1007 04:43:29.908924    6751 start.go:297] selected driver: qemu2
	I1007 04:43:29.908940    6751 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:43:29.909031    6751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:43:29.911914    6751 out.go:169] Automatically selected the socket_vmnet network
	I1007 04:43:29.917416    6751 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1007 04:43:29.917514    6751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 04:43:29.917549    6751 cni.go:84] Creating CNI manager for ""
	I1007 04:43:29.917580    6751 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 04:43:29.917630    6751 start.go:340] cluster config:
	{Name:download-only-915000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-915000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:43:29.922295    6751 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:43:29.925792    6751 out.go:97] Downloading VM boot image ...
	I1007 04:43:29.925821    6751 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1007 04:43:47.639975    6751 out.go:97] Starting "download-only-915000" primary control-plane node in "download-only-915000" cluster
	I1007 04:43:47.639994    6751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 04:43:48.361197    6751 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 04:43:48.361236    6751 cache.go:56] Caching tarball of preloaded images
	I1007 04:43:48.362146    6751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 04:43:48.367145    6751 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 04:43:48.367167    6751 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 04:43:49.512374    6751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 04:44:10.884232    6751 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 04:44:10.884402    6751 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 04:44:11.579449    6751 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 04:44:11.579648    6751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/download-only-915000/config.json ...
	I1007 04:44:11.579665    6751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/download-only-915000/config.json: {Name:mkb3cda34e00aed3e3b45773ad5a451249c45514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:44:11.579918    6751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 04:44:11.580160    6751 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1007 04:44:12.150034    6751 out.go:193] 
	W1007 04:44:12.154106    6751 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60] Decompressors:map[bz2:0x1400000fca0 gz:0x1400000fca8 tar:0x1400000fc50 tar.bz2:0x1400000fc60 tar.gz:0x1400000fc70 tar.xz:0x1400000fc80 tar.zst:0x1400000fc90 tbz2:0x1400000fc60 tgz:0x1400000fc70 txz:0x1400000fc80 tzst:0x1400000fc90 xz:0x1400000fcb0 zip:0x1400000fcd0 zst:0x1400000fcb8] Getters:map[file:0x140005b2b30 http:0x140008c00a0 https:0x140008c00f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1007 04:44:12.154128    6751 out_reason.go:110] 
	W1007 04:44:12.161103    6751 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:44:12.165068    6751 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-915000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (42.38s)
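
The error event above pins down the root cause: minikube tries to cache a v1.20.0 kubectl for darwin/arm64 from dl.k8s.io, and the request for the companion .sha256 checksum file returns 404 (presumably because upstream never published darwin/arm64 binaries for that release), which is what produces exit status 40. A minimal standalone Go sketch, not part of the test suite, that probes the two URLs copied verbatim from the log to confirm the 404:

package main

import (
	"fmt"
	"net/http"
)

// Probes the kubectl download URLs taken verbatim from the failure above.
// A 404 on the .sha256 file is exactly what makes minikube exit with status 40.
func main() {
	urls := []string{
		"https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl",
		"https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256",
	}
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			fmt.Printf("%s: %v\n", u, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: %s\n", u, resp.Status)
	}
}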

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
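
This subtest fails purely as a consequence of the previous one: aaa_download_only_test.go only stats the cache path that the download step should have populated. A standalone Go sketch of the same check, with the path copied from the failure message (the program itself is illustrative, not the test's code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied from the failure message above.
	const cached = "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(cached); err != nil {
		fmt.Println("missing:", err) // matches the "no such file or directory" in the log
		return
	}
	fmt.Println("present:", cached)
}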

TestOffline (9.97s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-471000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-471000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.808869208s)

-- stdout --
	* [offline-docker-471000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-471000" primary control-plane node in "offline-docker-471000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-471000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:56:12.892026    8141 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:56:12.892191    8141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:12.892195    8141 out.go:358] Setting ErrFile to fd 2...
	I1007 04:56:12.892197    8141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:12.892333    8141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:56:12.893619    8141 out.go:352] Setting JSON to false
	I1007 04:56:12.913012    8141 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5143,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:56:12.913090    8141 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:56:12.918349    8141 out.go:177] * [offline-docker-471000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:56:12.926362    8141 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:56:12.926384    8141 notify.go:220] Checking for updates...
	I1007 04:56:12.933345    8141 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:56:12.936327    8141 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:56:12.939383    8141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:56:12.942333    8141 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:56:12.945309    8141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:56:12.948748    8141 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:56:12.948801    8141 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:56:12.953309    8141 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:56:12.960359    8141 start.go:297] selected driver: qemu2
	I1007 04:56:12.960366    8141 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:56:12.960373    8141 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:56:12.962543    8141 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:56:12.965339    8141 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:56:12.968395    8141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:56:12.968415    8141 cni.go:84] Creating CNI manager for ""
	I1007 04:56:12.968436    8141 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:56:12.968439    8141 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 04:56:12.968478    8141 start.go:340] cluster config:
	{Name:offline-docker-471000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:56:12.973127    8141 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:56:12.981350    8141 out.go:177] * Starting "offline-docker-471000" primary control-plane node in "offline-docker-471000" cluster
	I1007 04:56:12.985343    8141 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:56:12.985377    8141 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:56:12.985386    8141 cache.go:56] Caching tarball of preloaded images
	I1007 04:56:12.985474    8141 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:56:12.985481    8141 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:56:12.985544    8141 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/offline-docker-471000/config.json ...
	I1007 04:56:12.985554    8141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/offline-docker-471000/config.json: {Name:mk23a7c3cb82963893d1967cf7422f24c5ec5968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:56:12.985829    8141 start.go:360] acquireMachinesLock for offline-docker-471000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:12.985879    8141 start.go:364] duration metric: took 43.125µs to acquireMachinesLock for "offline-docker-471000"
	I1007 04:56:12.985891    8141 start.go:93] Provisioning new machine with config: &{Name:offline-docker-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:12.985924    8141 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:12.990351    8141 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:13.005781    8141 start.go:159] libmachine.API.Create for "offline-docker-471000" (driver="qemu2")
	I1007 04:56:13.005809    8141 client.go:168] LocalClient.Create starting
	I1007 04:56:13.005900    8141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:13.005938    8141 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:13.005951    8141 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:13.005994    8141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:13.006023    8141 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:13.006032    8141 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:13.006478    8141 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:13.151971    8141 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:13.223117    8141 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:13.223130    8141 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:13.223334    8141 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2
	I1007 04:56:13.233765    8141 main.go:141] libmachine: STDOUT: 
	I1007 04:56:13.233789    8141 main.go:141] libmachine: STDERR: 
	I1007 04:56:13.233858    8141 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2 +20000M
	I1007 04:56:13.248128    8141 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:13.248146    8141 main.go:141] libmachine: STDERR: 
	I1007 04:56:13.248177    8141 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2
	I1007 04:56:13.248183    8141 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:13.248196    8141 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:13.248237    8141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:09:18:35:18:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2
	I1007 04:56:13.250120    8141 main.go:141] libmachine: STDOUT: 
	I1007 04:56:13.250142    8141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:13.250164    8141 client.go:171] duration metric: took 244.349875ms to LocalClient.Create
	I1007 04:56:15.250384    8141 start.go:128] duration metric: took 2.264456792s to createHost
	I1007 04:56:15.250397    8141 start.go:83] releasing machines lock for "offline-docker-471000", held for 2.264520791s
	W1007 04:56:15.250409    8141 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:15.259013    8141 out.go:177] * Deleting "offline-docker-471000" in qemu2 ...
	W1007 04:56:15.267658    8141 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:15.267667    8141 start.go:729] Will try again in 5 seconds ...
	I1007 04:56:20.269916    8141 start.go:360] acquireMachinesLock for offline-docker-471000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:20.270445    8141 start.go:364] duration metric: took 417.833µs to acquireMachinesLock for "offline-docker-471000"
	I1007 04:56:20.270582    8141 start.go:93] Provisioning new machine with config: &{Name:offline-docker-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:20.270910    8141 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:20.279592    8141 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:20.329988    8141 start.go:159] libmachine.API.Create for "offline-docker-471000" (driver="qemu2")
	I1007 04:56:20.330040    8141 client.go:168] LocalClient.Create starting
	I1007 04:56:20.330192    8141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:20.330280    8141 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:20.330301    8141 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:20.330359    8141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:20.330434    8141 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:20.330462    8141 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:20.331035    8141 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:20.484819    8141 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:20.597680    8141 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:20.597688    8141 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:20.597889    8141 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2
	I1007 04:56:20.607623    8141 main.go:141] libmachine: STDOUT: 
	I1007 04:56:20.607651    8141 main.go:141] libmachine: STDERR: 
	I1007 04:56:20.607728    8141 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2 +20000M
	I1007 04:56:20.616497    8141 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:20.616513    8141 main.go:141] libmachine: STDERR: 
	I1007 04:56:20.616531    8141 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2
	I1007 04:56:20.616536    8141 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:20.616546    8141 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:20.616574    8141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:9c:fe:d7:fa:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/offline-docker-471000/disk.qcow2
	I1007 04:56:20.618364    8141 main.go:141] libmachine: STDOUT: 
	I1007 04:56:20.618377    8141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:20.618390    8141 client.go:171] duration metric: took 288.342583ms to LocalClient.Create
	I1007 04:56:22.620558    8141 start.go:128] duration metric: took 2.349616542s to createHost
	I1007 04:56:22.620634    8141 start.go:83] releasing machines lock for "offline-docker-471000", held for 2.350171209s
	W1007 04:56:22.621188    8141 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-471000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-471000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:22.635521    8141 out.go:201] 
	W1007 04:56:22.639659    8141 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:56:22.639696    8141 out.go:270] * 
	* 
	W1007 04:56:22.642543    8141 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:56:22.652598    8141 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-471000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-07 04:56:22.670395 -0700 PDT m=+772.884949418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-471000 -n offline-docker-471000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-471000 -n offline-docker-471000: exit status 7 (72.226875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-471000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-471000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-471000
I1007 04:56:22.825516    6750 install.go:79] stdout: 
W1007 04:56:22.825653    6750 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit 

I1007 04:56:22.825671    6750 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit]
--- FAIL: TestOffline (9.97s)
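
Unlike the two download failures above, this failure mode accounts for most of the run: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so both VM creation attempts fail and minikube exits with status 80. A standalone Go sketch, not part of the suite, that dials the socket to check whether the daemon is accepting connections before a rerun:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the failure logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no daemon is
		// accepting on it; "no such file or directory" means socket_vmnet
		// was never started on this agent.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}

If the dial fails the same way, the fix is on the host rather than in minikube: the socket_vmnet service (normally run as root, since /var/run is root-owned) needs to be started or restarted on the agent. Every qemu2 "Failed to connect" failure below shares this signature.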

TestAddons/Setup (10.01s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-193000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-193000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.006056917s)

-- stdout --
	* [addons-193000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-193000" primary control-plane node in "addons-193000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-193000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:44:32.051232    6825 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:44:32.051402    6825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:44:32.051405    6825 out.go:358] Setting ErrFile to fd 2...
	I1007 04:44:32.051408    6825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:44:32.051541    6825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:44:32.052684    6825 out.go:352] Setting JSON to false
	I1007 04:44:32.070372    6825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4443,"bootTime":1728297029,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:44:32.070437    6825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:44:32.074891    6825 out.go:177] * [addons-193000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:44:32.082828    6825 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:44:32.082881    6825 notify.go:220] Checking for updates...
	I1007 04:44:32.089816    6825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:44:32.092804    6825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:44:32.095900    6825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:44:32.098805    6825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:44:32.105756    6825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:44:32.109064    6825 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:44:32.112803    6825 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:44:32.118785    6825 start.go:297] selected driver: qemu2
	I1007 04:44:32.118791    6825 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:44:32.118798    6825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:44:32.121255    6825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:44:32.124759    6825 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:44:32.127922    6825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:44:32.127945    6825 cni.go:84] Creating CNI manager for ""
	I1007 04:44:32.127969    6825 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:44:32.127973    6825 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 04:44:32.128012    6825 start.go:340] cluster config:
	{Name:addons-193000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:44:32.132821    6825 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:44:32.139820    6825 out.go:177] * Starting "addons-193000" primary control-plane node in "addons-193000" cluster
	I1007 04:44:32.143871    6825 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:44:32.143890    6825 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:44:32.143897    6825 cache.go:56] Caching tarball of preloaded images
	I1007 04:44:32.143994    6825 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:44:32.144001    6825 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:44:32.144228    6825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/addons-193000/config.json ...
	I1007 04:44:32.144239    6825 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/addons-193000/config.json: {Name:mk641f0a9ae1725c5bd1d681dd3e6b0e969adec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:44:32.144540    6825 start.go:360] acquireMachinesLock for addons-193000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:44:32.144645    6825 start.go:364] duration metric: took 98.584µs to acquireMachinesLock for "addons-193000"
	I1007 04:44:32.144658    6825 start.go:93] Provisioning new machine with config: &{Name:addons-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:44:32.144691    6825 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:44:32.148825    6825 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 04:44:32.167067    6825 start.go:159] libmachine.API.Create for "addons-193000" (driver="qemu2")
	I1007 04:44:32.167109    6825 client.go:168] LocalClient.Create starting
	I1007 04:44:32.167249    6825 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:44:32.269589    6825 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:44:32.345403    6825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:44:32.486395    6825 main.go:141] libmachine: Creating SSH key...
	I1007 04:44:32.534840    6825 main.go:141] libmachine: Creating Disk image...
	I1007 04:44:32.534845    6825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:44:32.535046    6825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2
	I1007 04:44:32.545004    6825 main.go:141] libmachine: STDOUT: 
	I1007 04:44:32.545027    6825 main.go:141] libmachine: STDERR: 
	I1007 04:44:32.545079    6825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2 +20000M
	I1007 04:44:32.553706    6825 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:44:32.553719    6825 main.go:141] libmachine: STDERR: 
	I1007 04:44:32.553734    6825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2
	I1007 04:44:32.553741    6825 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:44:32.553779    6825 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:44:32.553819    6825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:d5:9a:1c:00:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2
	I1007 04:44:32.555561    6825 main.go:141] libmachine: STDOUT: 
	I1007 04:44:32.555602    6825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:44:32.555635    6825 client.go:171] duration metric: took 388.512958ms to LocalClient.Create
	I1007 04:44:34.557877    6825 start.go:128] duration metric: took 2.413171209s to createHost
	I1007 04:44:34.557958    6825 start.go:83] releasing machines lock for "addons-193000", held for 2.413319709s
	W1007 04:44:34.558030    6825 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:44:34.568934    6825 out.go:177] * Deleting "addons-193000" in qemu2 ...
	W1007 04:44:34.594850    6825 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:44:34.594877    6825 start.go:729] Will try again in 5 seconds ...
	I1007 04:44:39.597016    6825 start.go:360] acquireMachinesLock for addons-193000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:44:39.597603    6825 start.go:364] duration metric: took 501.75µs to acquireMachinesLock for "addons-193000"
	I1007 04:44:39.597736    6825 start.go:93] Provisioning new machine with config: &{Name:addons-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:44:39.598017    6825 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:44:39.611870    6825 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 04:44:39.661309    6825 start.go:159] libmachine.API.Create for "addons-193000" (driver="qemu2")
	I1007 04:44:39.661369    6825 client.go:168] LocalClient.Create starting
	I1007 04:44:39.661503    6825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:44:39.661573    6825 main.go:141] libmachine: Decoding PEM data...
	I1007 04:44:39.661590    6825 main.go:141] libmachine: Parsing certificate...
	I1007 04:44:39.661666    6825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:44:39.661723    6825 main.go:141] libmachine: Decoding PEM data...
	I1007 04:44:39.661737    6825 main.go:141] libmachine: Parsing certificate...
	I1007 04:44:39.662359    6825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:44:39.814697    6825 main.go:141] libmachine: Creating SSH key...
	I1007 04:44:39.953707    6825 main.go:141] libmachine: Creating Disk image...
	I1007 04:44:39.953714    6825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:44:39.953928    6825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2
	I1007 04:44:39.964318    6825 main.go:141] libmachine: STDOUT: 
	I1007 04:44:39.964336    6825 main.go:141] libmachine: STDERR: 
	I1007 04:44:39.964398    6825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2 +20000M
	I1007 04:44:39.972850    6825 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:44:39.972866    6825 main.go:141] libmachine: STDERR: 
	I1007 04:44:39.972884    6825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2
	I1007 04:44:39.972890    6825 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:44:39.972900    6825 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:44:39.972933    6825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:22:e5:d1:65:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/addons-193000/disk.qcow2
	I1007 04:44:39.974720    6825 main.go:141] libmachine: STDOUT: 
	I1007 04:44:39.974737    6825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:44:39.974750    6825 client.go:171] duration metric: took 313.376458ms to LocalClient.Create
	I1007 04:44:41.976951    6825 start.go:128] duration metric: took 2.3789225s to createHost
	I1007 04:44:41.976998    6825 start.go:83] releasing machines lock for "addons-193000", held for 2.379382042s
	W1007 04:44:41.977361    6825 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:44:41.990044    6825 out.go:201] 
	W1007 04:44:41.994243    6825 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:44:41.994284    6825 out.go:270] * 
	* 
	W1007 04:44:41.996908    6825 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:44:42.009998    6825 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-193000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.01s)
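
Note: every qemu2 start in this report dies the same way: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and minikube exits with status 80 (GUEST_PROVISION). A minimal sketch of a host-side probe that reproduces the failure, assuming only the socket path shown in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// socket_vmnet_client connects here before handing QEMU its network fd;
	// "connection refused" on this dial matches the errors logged above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, the socket_vmnet daemon on the CI host is down; for Homebrew installs it typically runs as a root launchd service (e.g. `sudo brew services start socket_vmnet`), so restarting that service is the usual remedy.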

TestCertOptions (10.02s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.747353166s)

-- stdout --
	* [cert-options-287000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-287000" primary control-plane node in "cert-options-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-287000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-287000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (87.594667ms)

-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-287000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-287000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-287000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-287000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (46.034792ms)

-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-287000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-07 04:56:53.287806 -0700 PDT m=+803.502451001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-287000 -n cert-options-287000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-287000 -n cert-options-287000: exit status 7 (34.175333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-287000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-287000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-287000
--- FAIL: TestCertOptions (10.02s)
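
Note: TestCertOptions asserts that the apiserver certificate's SANs contain 127.0.0.1, 192.168.15.15, localhost, and www.google.com by running `openssl x509` inside the VM; because the VM never started, all four SAN checks failed vacuously. A local equivalent of that SAN inspection, sketched with Go's crypto/x509 (the file path is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Parse a PEM certificate and list the SAN entries the test asserts on,
	// mirroring `openssl x509 -text -noout -in .../apiserver.crt`.
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
}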

TestCertExpiration (195.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-557000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-557000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.119394625s)

-- stdout --
	* [cert-expiration-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-557000" primary control-plane node in "cert-expiration-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-557000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-557000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-557000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.195541667s)

-- stdout --
	* [cert-expiration-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-557000" primary control-plane node in "cert-expiration-557000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-557000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-557000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-557000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-557000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-557000" primary control-plane node in "cert-expiration-557000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-557000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-557000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-557000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-07 04:59:53.50351 -0700 PDT m=+983.718688960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-557000 -n cert-expiration-557000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-557000 -n cert-expiration-557000: exit status 7 (67.008791ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-557000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-557000
--- FAIL: TestCertExpiration (195.46s)
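
Note: TestCertExpiration first starts a cluster with --cert-expiration=3m, waits out the three minutes (the bulk of the 195s runtime), then restarts with --cert-expiration=8760h and expects a warning about expired certs; here both starts failed on socket_vmnet, so the warning never appeared. A sketch of the underlying expiry check using Go's crypto/x509 (placeholder path again):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// With --cert-expiration=3m the cert's NotAfter lands ~3 minutes after
	// issuance; a later start should warn once time.Now() passes it.
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("NotAfter:", cert.NotAfter)
	fmt.Println("expired:", time.Now().After(cert.NotAfter))
}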

TestDockerFlags (10.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-879000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-879000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.0405155s)

-- stdout --
	* [docker-flags-879000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-879000" primary control-plane node in "docker-flags-879000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-879000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:56:33.125817    8328 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:56:33.125985    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:33.125988    8328 out.go:358] Setting ErrFile to fd 2...
	I1007 04:56:33.125990    8328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:33.126116    8328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:56:33.130042    8328 out.go:352] Setting JSON to false
	I1007 04:56:33.147909    8328 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5164,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:56:33.147983    8328 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:56:33.151768    8328 out.go:177] * [docker-flags-879000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:56:33.158776    8328 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:56:33.158848    8328 notify.go:220] Checking for updates...
	I1007 04:56:33.165721    8328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:56:33.168690    8328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:56:33.171803    8328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:56:33.174722    8328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:56:33.177670    8328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:56:33.181114    8328 config.go:182] Loaded profile config "force-systemd-flag-956000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:56:33.181191    8328 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:56:33.181241    8328 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:56:33.185811    8328 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:56:33.192708    8328 start.go:297] selected driver: qemu2
	I1007 04:56:33.192714    8328 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:56:33.192721    8328 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:56:33.195236    8328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:56:33.198772    8328 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:56:33.201755    8328 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1007 04:56:33.201781    8328 cni.go:84] Creating CNI manager for ""
	I1007 04:56:33.201806    8328 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:56:33.201814    8328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 04:56:33.201845    8328 start.go:340] cluster config:
	{Name:docker-flags-879000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:56:33.206586    8328 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:56:33.214576    8328 out.go:177] * Starting "docker-flags-879000" primary control-plane node in "docker-flags-879000" cluster
	I1007 04:56:33.218724    8328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:56:33.218741    8328 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:56:33.218751    8328 cache.go:56] Caching tarball of preloaded images
	I1007 04:56:33.218852    8328 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:56:33.218859    8328 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:56:33.218933    8328 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/docker-flags-879000/config.json ...
	I1007 04:56:33.218944    8328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/docker-flags-879000/config.json: {Name:mk5d8429bb5fccdcee287e1fdc4b27989e8adcf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:56:33.219310    8328 start.go:360] acquireMachinesLock for docker-flags-879000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:33.219366    8328 start.go:364] duration metric: took 45.875µs to acquireMachinesLock for "docker-flags-879000"
	I1007 04:56:33.219380    8328 start.go:93] Provisioning new machine with config: &{Name:docker-flags-879000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:33.219407    8328 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:33.226698    8328 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:33.244615    8328 start.go:159] libmachine.API.Create for "docker-flags-879000" (driver="qemu2")
	I1007 04:56:33.244648    8328 client.go:168] LocalClient.Create starting
	I1007 04:56:33.244715    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:33.244753    8328 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:33.244765    8328 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:33.244808    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:33.244837    8328 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:33.244845    8328 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:33.245221    8328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:33.389088    8328 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:33.623196    8328 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:33.623206    8328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:33.623439    8328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2
	I1007 04:56:33.633823    8328 main.go:141] libmachine: STDOUT: 
	I1007 04:56:33.633846    8328 main.go:141] libmachine: STDERR: 
	I1007 04:56:33.633912    8328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2 +20000M
	I1007 04:56:33.642344    8328 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:33.642359    8328 main.go:141] libmachine: STDERR: 
	I1007 04:56:33.642377    8328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2
	I1007 04:56:33.642383    8328 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:33.642398    8328 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:33.642423    8328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:c9:ca:35:db:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2
	I1007 04:56:33.644248    8328 main.go:141] libmachine: STDOUT: 
	I1007 04:56:33.644262    8328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:33.644281    8328 client.go:171] duration metric: took 399.627875ms to LocalClient.Create
	I1007 04:56:35.646510    8328 start.go:128] duration metric: took 2.427070375s to createHost
	I1007 04:56:35.646595    8328 start.go:83] releasing machines lock for "docker-flags-879000", held for 2.427225667s
	W1007 04:56:35.646682    8328 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:35.668749    8328 out.go:177] * Deleting "docker-flags-879000" in qemu2 ...
	W1007 04:56:35.688370    8328 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:35.688393    8328 start.go:729] Will try again in 5 seconds ...
	I1007 04:56:40.690603    8328 start.go:360] acquireMachinesLock for docker-flags-879000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:40.691091    8328 start.go:364] duration metric: took 387.083µs to acquireMachinesLock for "docker-flags-879000"
	I1007 04:56:40.691211    8328 start.go:93] Provisioning new machine with config: &{Name:docker-flags-879000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKe
y: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:40.691460    8328 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:40.701604    8328 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:40.752775    8328 start.go:159] libmachine.API.Create for "docker-flags-879000" (driver="qemu2")
	I1007 04:56:40.752870    8328 client.go:168] LocalClient.Create starting
	I1007 04:56:40.753035    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:40.753109    8328 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:40.753142    8328 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:40.753208    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:40.753264    8328 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:40.753276    8328 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:40.753848    8328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:40.912788    8328 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:41.068085    8328 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:41.068101    8328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:41.068297    8328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2
	I1007 04:56:41.078214    8328 main.go:141] libmachine: STDOUT: 
	I1007 04:56:41.078266    8328 main.go:141] libmachine: STDERR: 
	I1007 04:56:41.078325    8328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2 +20000M
	I1007 04:56:41.086685    8328 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:41.086702    8328 main.go:141] libmachine: STDERR: 
	I1007 04:56:41.086719    8328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2
	I1007 04:56:41.086725    8328 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:41.086733    8328 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:41.086768    8328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:8f:6b:bd:d9:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/docker-flags-879000/disk.qcow2
	I1007 04:56:41.088614    8328 main.go:141] libmachine: STDOUT: 
	I1007 04:56:41.088665    8328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:41.088678    8328 client.go:171] duration metric: took 335.803ms to LocalClient.Create
	I1007 04:56:43.090845    8328 start.go:128] duration metric: took 2.399362875s to createHost
	I1007 04:56:43.090889    8328 start.go:83] releasing machines lock for "docker-flags-879000", held for 2.399782208s
	W1007 04:56:43.091261    8328 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-879000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-879000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:43.102839    8328 out.go:201] 
	W1007 04:56:43.107041    8328 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:56:43.107072    8328 out.go:270] * 
	* 
	W1007 04:56:43.109756    8328 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:56:43.119901    8328 out.go:201] 

** /stderr **
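Note the shape of the failure above: both create attempts get through the qemu-img disk steps cleanly and fail only when socket_vmnet_client tries to reach the vmnet daemon. The disk steps are reproducible in isolation; a minimal sketch using the same invocations as the log (paths shortened, assuming a raw image already exists at disk.qcow2.raw):

$ qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
$ qemu-img resize disk.qcow2 +20000M

convert is silent on success and resize prints "Image resized." -- exactly the STDOUT/STDERR pairs recorded above before the socket error.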
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-879000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-879000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-879000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (82.686834ms)

-- stdout --
	* The control-plane node docker-flags-879000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-879000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-879000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-879000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-879000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-879000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-879000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-879000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-879000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.1985ms)

-- stdout --
	* The control-plane node docker-flags-879000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-879000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-879000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-879000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-879000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-879000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-07 04:56:43.263499 -0700 PDT m=+793.478114460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-879000 -n docker-flags-879000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-879000 -n docker-flags-879000: exit status 7 (33.155667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-879000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-879000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-879000
--- FAIL: TestDockerFlags (10.28s)
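The failure here (and in every qemu2 test in this run) is environmental rather than a regression in the flags themselves: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" and the VM never boots, leaving the host Stopped for the systemctl assertions. Hedged checks for the CI host, using the paths from the logged command line (the manual-start flags below are an assumption based on a source install under /opt/socket_vmnet; this report does not verify them):

$ ls -l /var/run/socket_vmnet
$ pgrep -fl socket_vmnet
$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon reachable, the assertions reduce to verifying that --docker-env and --docker-opt survive into the guest's docker unit: `systemctl show docker --property=Environment` should list FOO=BAR and BAZ=BAT, and --property=ExecStart should contain --debug.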

TestForceSystemdFlag (10.09s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-956000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-956000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.88802625s)

-- stdout --
	* [force-systemd-flag-956000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-956000" primary control-plane node in "force-systemd-flag-956000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-956000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:56:28.124825    8307 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:56:28.125002    8307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:28.125005    8307 out.go:358] Setting ErrFile to fd 2...
	I1007 04:56:28.125007    8307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:28.125150    8307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:56:28.126326    8307 out.go:352] Setting JSON to false
	I1007 04:56:28.143955    8307 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5159,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:56:28.144027    8307 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:56:28.150283    8307 out.go:177] * [force-systemd-flag-956000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:56:28.162219    8307 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:56:28.162264    8307 notify.go:220] Checking for updates...
	I1007 04:56:28.169208    8307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:56:28.173243    8307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:56:28.176213    8307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:56:28.179270    8307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:56:28.182242    8307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:56:28.183911    8307 config.go:182] Loaded profile config "force-systemd-env-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:56:28.183990    8307 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:56:28.184033    8307 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:56:28.188245    8307 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:56:28.195071    8307 start.go:297] selected driver: qemu2
	I1007 04:56:28.195077    8307 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:56:28.195083    8307 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:56:28.197453    8307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:56:28.200223    8307 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:56:28.203394    8307 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 04:56:28.203408    8307 cni.go:84] Creating CNI manager for ""
	I1007 04:56:28.203432    8307 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:56:28.203454    8307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 04:56:28.203493    8307 start.go:340] cluster config:
	{Name:force-systemd-flag-956000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:56:28.208228    8307 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:56:28.216179    8307 out.go:177] * Starting "force-systemd-flag-956000" primary control-plane node in "force-systemd-flag-956000" cluster
	I1007 04:56:28.220231    8307 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:56:28.220245    8307 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:56:28.220251    8307 cache.go:56] Caching tarball of preloaded images
	I1007 04:56:28.220320    8307 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:56:28.220326    8307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:56:28.220390    8307 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/force-systemd-flag-956000/config.json ...
	I1007 04:56:28.220402    8307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/force-systemd-flag-956000/config.json: {Name:mk1e5b21c82f598f995b2d9c81140f9375f0e94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:56:28.220656    8307 start.go:360] acquireMachinesLock for force-systemd-flag-956000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:28.220728    8307 start.go:364] duration metric: took 45.041µs to acquireMachinesLock for "force-systemd-flag-956000"
	I1007 04:56:28.220740    8307 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-956000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:28.220767    8307 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:28.229236    8307 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:28.247423    8307 start.go:159] libmachine.API.Create for "force-systemd-flag-956000" (driver="qemu2")
	I1007 04:56:28.247456    8307 client.go:168] LocalClient.Create starting
	I1007 04:56:28.247532    8307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:28.247575    8307 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:28.247585    8307 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:28.247630    8307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:28.247662    8307 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:28.247671    8307 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:28.248105    8307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:28.390014    8307 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:28.480656    8307 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:28.480663    8307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:28.480844    8307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2
	I1007 04:56:28.490632    8307 main.go:141] libmachine: STDOUT: 
	I1007 04:56:28.490646    8307 main.go:141] libmachine: STDERR: 
	I1007 04:56:28.490712    8307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2 +20000M
	I1007 04:56:28.499034    8307 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:28.499053    8307 main.go:141] libmachine: STDERR: 
	I1007 04:56:28.499066    8307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2
	I1007 04:56:28.499071    8307 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:28.499084    8307 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:28.499122    8307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:7c:30:ec:c5:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2
	I1007 04:56:28.500911    8307 main.go:141] libmachine: STDOUT: 
	I1007 04:56:28.500935    8307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:28.500957    8307 client.go:171] duration metric: took 253.495125ms to LocalClient.Create
	I1007 04:56:30.503166    8307 start.go:128] duration metric: took 2.282373792s to createHost
	I1007 04:56:30.503242    8307 start.go:83] releasing machines lock for "force-systemd-flag-956000", held for 2.282506167s
	W1007 04:56:30.503327    8307 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:30.527398    8307 out.go:177] * Deleting "force-systemd-flag-956000" in qemu2 ...
	W1007 04:56:30.546695    8307 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:30.546711    8307 start.go:729] Will try again in 5 seconds ...
	I1007 04:56:35.548883    8307 start.go:360] acquireMachinesLock for force-systemd-flag-956000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:35.646752    8307 start.go:364] duration metric: took 97.763083ms to acquireMachinesLock for "force-systemd-flag-956000"
	I1007 04:56:35.646864    8307 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-956000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:35.647145    8307 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:35.660770    8307 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:35.708974    8307 start.go:159] libmachine.API.Create for "force-systemd-flag-956000" (driver="qemu2")
	I1007 04:56:35.709042    8307 client.go:168] LocalClient.Create starting
	I1007 04:56:35.709248    8307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:35.709332    8307 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:35.709347    8307 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:35.709418    8307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:35.709485    8307 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:35.709498    8307 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:35.710208    8307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:35.863624    8307 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:35.907582    8307 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:35.907587    8307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:35.907777    8307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2
	I1007 04:56:35.917823    8307 main.go:141] libmachine: STDOUT: 
	I1007 04:56:35.917843    8307 main.go:141] libmachine: STDERR: 
	I1007 04:56:35.917894    8307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2 +20000M
	I1007 04:56:35.926333    8307 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:35.926347    8307 main.go:141] libmachine: STDERR: 
	I1007 04:56:35.926359    8307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2
	I1007 04:56:35.926366    8307 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:35.926376    8307 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:35.926410    8307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:6f:cf:c1:51:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-flag-956000/disk.qcow2
	I1007 04:56:35.928257    8307 main.go:141] libmachine: STDOUT: 
	I1007 04:56:35.928271    8307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:35.928284    8307 client.go:171] duration metric: took 219.224959ms to LocalClient.Create
	I1007 04:56:37.930450    8307 start.go:128] duration metric: took 2.283280833s to createHost
	I1007 04:56:37.930499    8307 start.go:83] releasing machines lock for "force-systemd-flag-956000", held for 2.283730125s
	W1007 04:56:37.930870    8307 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-956000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-956000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:37.944477    8307 out.go:201] 
	W1007 04:56:37.952789    8307 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:56:37.952844    8307 out.go:270] * 
	* 
	W1007 04:56:37.955454    8307 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:56:37.967547    8307 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-956000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-956000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-956000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (87.488333ms)

-- stdout --
	* The control-plane node force-systemd-flag-956000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-956000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-956000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-07 04:56:38.071708 -0700 PDT m=+788.286308335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-956000 -n force-systemd-flag-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-956000 -n force-systemd-flag-956000: exit status 7 (37.062833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-956000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-956000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-956000
--- FAIL: TestForceSystemdFlag (10.09s)
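Same environmental root cause as TestDockerFlags above; the test fails before reaching its real assertion. For reference, the check it would perform on a live cluster is the ssh command from the log, where `systemd` is the cgroup driver that --force-systemd is meant to produce (the expected output line is an assumption here, since the VM never started):

$ out/minikube-darwin-arm64 -p force-systemd-flag-956000 ssh "docker info --format {{.CgroupDriver}}"
systemd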

TestForceSystemdEnv (10.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-994000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1007 04:56:22.838771    6750 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit]
I1007 04:56:22.852553    6750 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit]
I1007 04:56:22.865074    6750 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit]
I1007 04:56:22.886647    6750 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 04:56:22.886764    6750 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1007 04:56:24.725545    6750 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1007 04:56:24.725566    6750 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1007 04:56:24.725609    6750 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1007 04:56:24.725641    6750 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit
I1007 04:56:25.137708    6750 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380] Decompressors:map[bz2:0x1400046e230 gz:0x1400046e238 tar:0x1400046e1e0 tar.bz2:0x1400046e1f0 tar.gz:0x1400046e200 tar.xz:0x1400046e210 tar.zst:0x1400046e220 tbz2:0x1400046e1f0 tgz:0x1400046e200 txz:0x1400046e210 tzst:0x1400046e220 xz:0x1400046e240 zip:0x1400046e250 zst:0x1400046e248] Getters:map[file:0x1400097bee0 http:0x1400052a550 https:0x1400052a5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSym
links:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 04:56:25.137822    6750 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit
I1007 04:56:28.041633    6750 install.go:79] stdout: 
W1007 04:56:28.041830    6750 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit 

I1007 04:56:28.041868    6750 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit]
I1007 04:56:28.058508    6750 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit]
I1007 04:56:28.071552    6750 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit]
I1007 04:56:28.081975    6750 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/002/docker-machine-driver-hyperkit]
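The interleaved TestHyperKitDriverInstallOrUpdate lines above capture the installer's fallback path: the arm64-specific driver's checksum file 404s, so it retries the unsuffixed common artifact and then re-applies root ownership and the setuid bit. A curl sketch of the same two-step fetch (a hypothetical reproduction with checksum verification omitted; the URLs are taken verbatim from the log):

$ curl -fLo docker-machine-driver-hyperkit \
    https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64 \
  || curl -fLo docker-machine-driver-hyperkit \
    https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit
$ sudo chown root:wheel docker-machine-driver-hyperkit
$ sudo chmod u+s docker-machine-driver-hyperkit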
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-994000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.052241208s)

-- stdout --
	* [force-systemd-env-994000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-994000" primary control-plane node in "force-systemd-env-994000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-994000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:56:22.860166    8277 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:56:22.860303    8277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:22.860307    8277 out.go:358] Setting ErrFile to fd 2...
	I1007 04:56:22.860309    8277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:56:22.860425    8277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:56:22.861579    8277 out.go:352] Setting JSON to false
	I1007 04:56:22.881172    8277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5153,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:56:22.881252    8277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:56:22.886082    8277 out.go:177] * [force-systemd-env-994000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:56:22.893194    8277 notify.go:220] Checking for updates...
	I1007 04:56:22.897007    8277 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:56:22.900144    8277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:56:22.903097    8277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:56:22.906094    8277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:56:22.909105    8277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:56:22.912087    8277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1007 04:56:22.915379    8277 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:56:22.915428    8277 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:56:22.919063    8277 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:56:22.925061    8277 start.go:297] selected driver: qemu2
	I1007 04:56:22.925066    8277 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:56:22.925071    8277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:56:22.927606    8277 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:56:22.930065    8277 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:56:22.933176    8277 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 04:56:22.933190    8277 cni.go:84] Creating CNI manager for ""
	I1007 04:56:22.933221    8277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:56:22.933225    8277 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 04:56:22.933273    8277 start.go:340] cluster config:
	{Name:force-systemd-env-994000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-994000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:56:22.938329    8277 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:56:22.942032    8277 out.go:177] * Starting "force-systemd-env-994000" primary control-plane node in "force-systemd-env-994000" cluster
	I1007 04:56:22.950103    8277 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:56:22.950128    8277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:56:22.950135    8277 cache.go:56] Caching tarball of preloaded images
	I1007 04:56:22.950227    8277 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:56:22.950234    8277 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:56:22.950302    8277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/force-systemd-env-994000/config.json ...
	I1007 04:56:22.950314    8277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/force-systemd-env-994000/config.json: {Name:mk141b6f3b304b96a1681637c84b9e7dded051bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:56:22.950612    8277 start.go:360] acquireMachinesLock for force-systemd-env-994000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:22.950665    8277 start.go:364] duration metric: took 45.958µs to acquireMachinesLock for "force-systemd-env-994000"
	I1007 04:56:22.950680    8277 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-994000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:22.950713    8277 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:22.955135    8277 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:22.972162    8277 start.go:159] libmachine.API.Create for "force-systemd-env-994000" (driver="qemu2")
	I1007 04:56:22.972200    8277 client.go:168] LocalClient.Create starting
	I1007 04:56:22.972266    8277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:22.972314    8277 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:22.972328    8277 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:22.972368    8277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:22.972396    8277 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:22.972419    8277 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:22.972837    8277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:23.110869    8277 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:23.263945    8277 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:23.263954    8277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:23.264165    8277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2
	I1007 04:56:23.274582    8277 main.go:141] libmachine: STDOUT: 
	I1007 04:56:23.274602    8277 main.go:141] libmachine: STDERR: 
	I1007 04:56:23.274670    8277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2 +20000M
	I1007 04:56:23.283592    8277 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:23.283634    8277 main.go:141] libmachine: STDERR: 
	I1007 04:56:23.283647    8277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2
	I1007 04:56:23.283654    8277 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:23.283666    8277 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:23.283693    8277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:98:10:7b:e2:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2
	I1007 04:56:23.285641    8277 main.go:141] libmachine: STDOUT: 
	I1007 04:56:23.285674    8277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:23.285696    8277 client.go:171] duration metric: took 313.493834ms to LocalClient.Create
	I1007 04:56:25.288004    8277 start.go:128] duration metric: took 2.337250416s to createHost
	I1007 04:56:25.288086    8277 start.go:83] releasing machines lock for "force-systemd-env-994000", held for 2.337415125s
	W1007 04:56:25.288137    8277 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:25.302304    8277 out.go:177] * Deleting "force-systemd-env-994000" in qemu2 ...
	W1007 04:56:25.326163    8277 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:25.326191    8277 start.go:729] Will try again in 5 seconds ...
	I1007 04:56:30.328347    8277 start.go:360] acquireMachinesLock for force-systemd-env-994000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:56:30.503387    8277 start.go:364] duration metric: took 174.897625ms to acquireMachinesLock for "force-systemd-env-994000"
	I1007 04:56:30.503500    8277 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-994000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:56:30.503760    8277 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:56:30.517417    8277 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 04:56:30.565638    8277 start.go:159] libmachine.API.Create for "force-systemd-env-994000" (driver="qemu2")
	I1007 04:56:30.565680    8277 client.go:168] LocalClient.Create starting
	I1007 04:56:30.565840    8277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:56:30.565922    8277 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:30.565940    8277 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:30.566005    8277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:56:30.566061    8277 main.go:141] libmachine: Decoding PEM data...
	I1007 04:56:30.566072    8277 main.go:141] libmachine: Parsing certificate...
	I1007 04:56:30.566732    8277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:56:30.719936    8277 main.go:141] libmachine: Creating SSH key...
	I1007 04:56:30.806663    8277 main.go:141] libmachine: Creating Disk image...
	I1007 04:56:30.806668    8277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:56:30.806850    8277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2
	I1007 04:56:30.816827    8277 main.go:141] libmachine: STDOUT: 
	I1007 04:56:30.816849    8277 main.go:141] libmachine: STDERR: 
	I1007 04:56:30.816905    8277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2 +20000M
	I1007 04:56:30.825512    8277 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:56:30.825529    8277 main.go:141] libmachine: STDERR: 
	I1007 04:56:30.825546    8277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2
	I1007 04:56:30.825551    8277 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:56:30.825560    8277 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:56:30.825593    8277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:bc:f7:d2:bc:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/force-systemd-env-994000/disk.qcow2
	I1007 04:56:30.827455    8277 main.go:141] libmachine: STDOUT: 
	I1007 04:56:30.827473    8277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:56:30.827490    8277 client.go:171] duration metric: took 261.805833ms to LocalClient.Create
	I1007 04:56:32.829728    8277 start.go:128] duration metric: took 2.32594375s to createHost
	I1007 04:56:32.829820    8277 start.go:83] releasing machines lock for "force-systemd-env-994000", held for 2.326379292s
	W1007 04:56:32.830129    8277 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-994000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-994000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:56:32.841767    8277 out.go:201] 
	W1007 04:56:32.849007    8277 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:56:32.849038    8277 out.go:270] * 
	* 
	W1007 04:56:32.851704    8277 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:56:32.863713    8277 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-994000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-994000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-994000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.337458ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-994000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-994000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-994000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-07 04:56:32.970788 -0700 PDT m=+783.185373001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-994000 -n force-systemd-env-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-994000 -n force-systemd-env-994000: exit status 7 (33.635375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-994000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-994000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-994000
--- FAIL: TestForceSystemdEnv (10.27s)
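Every failure in this section reduces to the same condition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the VM never boots. A minimal diagnostic sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as on this host (the --vmnet-gateway address is illustrative, borrowed from the socket_vmnet README rather than from this run):

	# Is the daemon alive, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it by hand; vmnet requires root
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the socket accepts connections, the qemu-system-aarch64 invocation logged above should get past "Connection refused".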

                                                
                                    
TestErrorSpam/setup (9.97s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-744000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-744000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 --driver=qemu2 : exit status 80 (9.968959292s)

                                                
                                                
-- stdout --
	* [nospam-744000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-744000" primary control-plane node in "nospam-744000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-744000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-744000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-744000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-744000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19763
- KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-744000" primary control-plane node in "nospam-744000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-744000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.97s)
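The same socket_vmnet refusal kills this run before kubeadm ever starts, which is why the three expected init sub-steps never appear. If socket_vmnet had been installed through Homebrew rather than from source, the daemon could be supervised as a service; a sketch under that assumption (this host actually uses a manual /opt/socket_vmnet install, so paths differ):

	# Assumes a Homebrew install of socket_vmnet, not this host's /opt prefix
	brew install socket_vmnet
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet
	sudo "${HOMEBREW}" services list | grep socket_vmnet   # confirm it shows as started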

                                                
                                    
TestFunctional/serial/StartWithProxy (9.89s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-418000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-418000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.814508416s)

                                                
                                                
-- stdout --
	* [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-418000" primary control-plane node in "functional-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51079 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51079 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51079 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-418000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19763
- KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-418000" primary control-plane node in "functional-418000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-418000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51079 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51079 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51079 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (74.818958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.89s)
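StartWithProxy exports a localhost HTTP proxy before starting minikube and asserts on two messages: the "Local proxy ignored" warning, which did appear, and a "You appear to be using a proxy" notice, which never printed because the VM failed to boot first. A hedged reproduction of the scenario the test drives (port 51079 is the harness's ephemeral proxy port and will differ elsewhere):

	# Reproduce the proxy-start scenario by hand; the port number is illustrative
	HTTP_PROXY=localhost:51079 out/minikube-darwin-arm64 start -p functional-418000 \
	  --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2

minikube deliberately refuses to forward a localhost proxy into the guest's docker environment, since localhost inside the VM does not resolve to the host.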

                                                
                                    
TestFunctional/serial/SoftStart (5.27s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1007 04:45:14.018386    6750 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-418000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-418000 --alsologtostderr -v=8: exit status 80 (5.190699041s)

                                                
                                                
-- stdout --
	* [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-418000" primary control-plane node in "functional-418000" cluster
	* Restarting existing qemu2 VM for "functional-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 04:45:14.051549    6972 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:45:14.051725    6972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:45:14.051728    6972 out.go:358] Setting ErrFile to fd 2...
	I1007 04:45:14.051730    6972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:45:14.051843    6972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:45:14.052991    6972 out.go:352] Setting JSON to false
	I1007 04:45:14.070627    6972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4485,"bootTime":1728297029,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:45:14.070692    6972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:45:14.075794    6972 out.go:177] * [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:45:14.082673    6972 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:45:14.082718    6972 notify.go:220] Checking for updates...
	I1007 04:45:14.089646    6972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:45:14.092683    6972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:45:14.095681    6972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:45:14.096977    6972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:45:14.099646    6972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:45:14.103010    6972 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:45:14.103066    6972 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:45:14.107545    6972 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:45:14.114661    6972 start.go:297] selected driver: qemu2
	I1007 04:45:14.114667    6972 start.go:901] validating driver "qemu2" against &{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:45:14.114749    6972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:45:14.117160    6972 cni.go:84] Creating CNI manager for ""
	I1007 04:45:14.117196    6972 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:45:14.117245    6972 start.go:340] cluster config:
	{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:45:14.121702    6972 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:45:14.128664    6972 out.go:177] * Starting "functional-418000" primary control-plane node in "functional-418000" cluster
	I1007 04:45:14.132685    6972 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:45:14.132705    6972 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:45:14.132713    6972 cache.go:56] Caching tarball of preloaded images
	I1007 04:45:14.132799    6972 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:45:14.132805    6972 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:45:14.132870    6972 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/functional-418000/config.json ...
	I1007 04:45:14.133328    6972 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:45:14.133359    6972 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "functional-418000"
	I1007 04:45:14.133369    6972 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:45:14.133374    6972 fix.go:54] fixHost starting: 
	I1007 04:45:14.133504    6972 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
	W1007 04:45:14.133514    6972 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:45:14.137683    6972 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
	I1007 04:45:14.145616    6972 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:45:14.145646    6972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
	I1007 04:45:14.147821    6972 main.go:141] libmachine: STDOUT: 
	I1007 04:45:14.147840    6972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:45:14.147872    6972 fix.go:56] duration metric: took 14.496167ms for fixHost
	I1007 04:45:14.147877    6972 start.go:83] releasing machines lock for "functional-418000", held for 14.513208ms
	W1007 04:45:14.147883    6972 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:45:14.147931    6972 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:45:14.147936    6972 start.go:729] Will try again in 5 seconds ...
	I1007 04:45:19.150030    6972 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:45:19.150401    6972 start.go:364] duration metric: took 287.958µs to acquireMachinesLock for "functional-418000"
	I1007 04:45:19.150527    6972 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:45:19.150548    6972 fix.go:54] fixHost starting: 
	I1007 04:45:19.151185    6972 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
	W1007 04:45:19.151211    6972 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:45:19.158612    6972 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
	I1007 04:45:19.162425    6972 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:45:19.162654    6972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
	I1007 04:45:19.172840    6972 main.go:141] libmachine: STDOUT: 
	I1007 04:45:19.172897    6972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:45:19.172955    6972 fix.go:56] duration metric: took 22.412084ms for fixHost
	I1007 04:45:19.172973    6972 start.go:83] releasing machines lock for "functional-418000", held for 22.550166ms
	W1007 04:45:19.173136    6972 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:45:19.181587    6972 out.go:201] 
	W1007 04:45:19.185544    6972 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:45:19.185566    6972 out.go:270] * 
	* 
	W1007 04:45:19.188332    6972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:45:19.195540    6972 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-418000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.192380958s for "functional-418000" cluster.
I1007 04:45:19.211018    6750 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (74.317917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.31025ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-418000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (35.105292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
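kubectl resolves current-context from the kubeconfig this run points at, and since no start ever completed, minikube never wrote a context into it. A quick inspection sketch using the KUBECONFIG path from this run's environment:

	# List whatever contexts the run's kubeconfig holds (expected: none)
	export KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	kubectl config get-contexts
	kubectl config current-context   # fails with "current-context is not set", as above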

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-418000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-418000 get po -A: exit status 1 (26.668917ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-418000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-418000\n"*: args "kubectl --context functional-418000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-418000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (34.682333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl images: exit status 83 (45.782542ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.403042ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-418000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.016958ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (45.979125ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)
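
For reference, cache_reload drives the sequence sketched below. Note that "cache reload" itself exited 0, since it only repopulates the local image cache, while every ssh step failed with exit status 83 (the "host is not running" advisory). A sketch of the equivalent manual run, using the same subcommands the test invokes; it is only meaningful against a running node:

	out/minikube-darwin-arm64 -p functional-418000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-418000 cache reload
	out/minikube-darwin-arm64 -p functional-418000 ssh sudo crictl inspecti registry.k8s.io/pause:latest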

TestFunctional/serial/MinikubeKubectlCmd (0.78s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 kubectl -- --context functional-418000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 kubectl -- --context functional-418000 get pods: exit status 1 (746.155583ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-418000
	* no server found for cluster "functional-418000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-418000 kubectl -- --context functional-418000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (36.258417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.78s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.24s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-418000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-418000 get pods: exit status 1 (1.206929s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-418000
	* no server found for cluster "functional-418000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-418000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (34.5825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.24s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-418000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-418000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.190455458s)

-- stdout --
	* [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-418000" primary control-plane node in "functional-418000" cluster
	* Restarting existing qemu2 VM for "functional-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-418000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.191069709s for "functional-418000" cluster.
I1007 04:45:30.011170    6750 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (74.347292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
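
Every restart attempt in this report fails at the same point: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to "/var/run/socket_vmnet" is refused, i.e. the socket_vmnet daemon (the SocketVMnetPath configured in the profile above) is not listening. A sketch of the usual check and remedy, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver setup; the daemon must run as root to create the vmnet interface:

	ls -l /var/run/socket_vmnet
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet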

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-418000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-418000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.399292ms)

** stderr ** 
	error: context "functional-418000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-418000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (34.460042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 logs: exit status 83 (78.707333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:43 PDT |                     |
	|         | -p download-only-915000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| delete  | -p download-only-915000                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| start   | -o=json --download-only                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | -p download-only-501000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| delete  | -p download-only-501000                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| delete  | -p download-only-915000                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| delete  | -p download-only-501000                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| start   | --download-only -p                                                       | binary-mirror-828000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | binary-mirror-828000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51043                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-828000                                                  | binary-mirror-828000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| addons  | disable dashboard -p                                                     | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | addons-193000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | addons-193000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-193000 --wait=true                                             | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-193000                                                         | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| start   | -p nospam-744000 -n=1 --memory=2250 --wait=false                         | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:45 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-744000                                                         | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | minikube-local-cache-test:functional-418000                              |                      |         |         |                     |                     |
	| cache   | functional-418000 cache delete                                           | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | minikube-local-cache-test:functional-418000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	| ssh     | functional-418000 ssh sudo                                               | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-418000                                                        | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-418000 ssh                                                    | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-418000 cache reload                                           | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	| ssh     | functional-418000 ssh                                                    | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-418000 kubectl --                                             | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | --context functional-418000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 04:45:24
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 04:45:24.851217    7047 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:45:24.851356    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:45:24.851358    7047 out.go:358] Setting ErrFile to fd 2...
	I1007 04:45:24.851359    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:45:24.851469    7047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:45:24.852584    7047 out.go:352] Setting JSON to false
	I1007 04:45:24.870057    7047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4495,"bootTime":1728297029,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:45:24.870150    7047 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:45:24.876700    7047 out.go:177] * [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:45:24.884901    7047 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:45:24.884936    7047 notify.go:220] Checking for updates...
	I1007 04:45:24.891768    7047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:45:24.894788    7047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:45:24.897807    7047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:45:24.900797    7047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:45:24.902025    7047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:45:24.905060    7047 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:45:24.905107    7047 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:45:24.909814    7047 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:45:24.914780    7047 start.go:297] selected driver: qemu2
	I1007 04:45:24.914783    7047 start.go:901] validating driver "qemu2" against &{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:45:24.914835    7047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:45:24.917350    7047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:45:24.917375    7047 cni.go:84] Creating CNI manager for ""
	I1007 04:45:24.917409    7047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:45:24.917461    7047 start.go:340] cluster config:
	{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:45:24.921887    7047 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:45:24.928801    7047 out.go:177] * Starting "functional-418000" primary control-plane node in "functional-418000" cluster
	I1007 04:45:24.932845    7047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:45:24.932859    7047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:45:24.932869    7047 cache.go:56] Caching tarball of preloaded images
	I1007 04:45:24.932950    7047 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:45:24.932953    7047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:45:24.933019    7047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/functional-418000/config.json ...
	I1007 04:45:24.933431    7047 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:45:24.933478    7047 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "functional-418000"
	I1007 04:45:24.933486    7047 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:45:24.933489    7047 fix.go:54] fixHost starting: 
	I1007 04:45:24.933618    7047 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
	W1007 04:45:24.933627    7047 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:45:24.937771    7047 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
	I1007 04:45:24.945785    7047 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:45:24.945823    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
	I1007 04:45:24.948066    7047 main.go:141] libmachine: STDOUT: 
	I1007 04:45:24.948091    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:45:24.948123    7047 fix.go:56] duration metric: took 14.633708ms for fixHost
	I1007 04:45:24.948126    7047 start.go:83] releasing machines lock for "functional-418000", held for 14.64475ms
	W1007 04:45:24.948132    7047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:45:24.948172    7047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:45:24.948176    7047 start.go:729] Will try again in 5 seconds ...
	I1007 04:45:29.949917    7047 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:45:29.950269    7047 start.go:364] duration metric: took 297.667µs to acquireMachinesLock for "functional-418000"
	I1007 04:45:29.950398    7047 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:45:29.950409    7047 fix.go:54] fixHost starting: 
	I1007 04:45:29.951070    7047 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
	W1007 04:45:29.951089    7047 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:45:29.958449    7047 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
	I1007 04:45:29.962485    7047 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:45:29.962682    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
	I1007 04:45:29.972874    7047 main.go:141] libmachine: STDOUT: 
	I1007 04:45:29.972926    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:45:29.973001    7047 fix.go:56] duration metric: took 22.5915ms for fixHost
	I1007 04:45:29.973009    7047 start.go:83] releasing machines lock for "functional-418000", held for 22.717667ms
	W1007 04:45:29.973176    7047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:45:29.981492    7047 out.go:201] 
	W1007 04:45:29.985549    7047 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:45:29.985576    7047 out.go:270] * 
	W1007 04:45:29.988127    7047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:45:29.996426    7047 out.go:201] 
	
	
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-418000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:43 PDT |                     |
|         | -p download-only-915000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-915000                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| start   | -o=json --download-only                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | -p download-only-501000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-501000                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-915000                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-501000                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| start   | --download-only -p                                                       | binary-mirror-828000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | binary-mirror-828000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51043                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-828000                                                  | binary-mirror-828000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| addons  | disable dashboard -p                                                     | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | addons-193000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | addons-193000                                                            |                      |         |         |                     |                     |
| start   | -p addons-193000 --wait=true                                             | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-193000                                                         | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| start   | -p nospam-744000 -n=1 --memory=2250 --wait=false                         | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:45 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-744000                                                         | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | minikube-local-cache-test:functional-418000                              |                      |         |         |                     |                     |
| cache   | functional-418000 cache delete                                           | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | minikube-local-cache-test:functional-418000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
| ssh     | functional-418000 ssh sudo                                               | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-418000                                                        | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-418000 ssh                                                    | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-418000 cache reload                                           | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
| ssh     | functional-418000 ssh                                                    | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-418000 kubectl --                                             | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --context functional-418000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
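The audit table above is minikube's own record of every command issued in this run; the failing profile comes from the start -p functional-418000 rows near its bottom. Reassembling those wrapped rows gives the invocation that created the profile, which can be replayed outside the test harness (binary path per MINIKUBE_BIN in the start log below):

    out/minikube-darwin-arm64 start -p functional-418000 --memory=4000 \
        --apiserver-port=8441 --wait=all --driver=qemu2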
==> Last Start <==
Log file created at: 2024/10/07 04:45:24
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1007 04:45:24.851217    7047 out.go:345] Setting OutFile to fd 1 ...
I1007 04:45:24.851356    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:45:24.851358    7047 out.go:358] Setting ErrFile to fd 2...
I1007 04:45:24.851359    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:45:24.851469    7047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:45:24.852584    7047 out.go:352] Setting JSON to false
I1007 04:45:24.870057    7047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4495,"bootTime":1728297029,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1007 04:45:24.870150    7047 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1007 04:45:24.876700    7047 out.go:177] * [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1007 04:45:24.884901    7047 out.go:177]   - MINIKUBE_LOCATION=19763
I1007 04:45:24.884936    7047 notify.go:220] Checking for updates...
I1007 04:45:24.891768    7047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
I1007 04:45:24.894788    7047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1007 04:45:24.897807    7047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1007 04:45:24.900797    7047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
I1007 04:45:24.902025    7047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1007 04:45:24.905060    7047 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:45:24.905107    7047 driver.go:394] Setting default libvirt URI to qemu:///system
I1007 04:45:24.909814    7047 out.go:177] * Using the qemu2 driver based on existing profile
I1007 04:45:24.914780    7047 start.go:297] selected driver: qemu2
I1007 04:45:24.914783    7047 start.go:901] validating driver "qemu2" against &{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1007 04:45:24.914835    7047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1007 04:45:24.917350    7047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1007 04:45:24.917375    7047 cni.go:84] Creating CNI manager for ""
I1007 04:45:24.917409    7047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1007 04:45:24.917461    7047 start.go:340] cluster config:
{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
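This persisted cluster config is also where the audit table's --extra-config flag lands: the final start entry in the table maps directly to the ExtraOptions field above. Reassembled from the wrapped table rows, the correspondence is:

    # ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]
    out/minikube-darwin-arm64 start -p functional-418000 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all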
I1007 04:45:24.921887    7047 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 04:45:24.928801    7047 out.go:177] * Starting "functional-418000" primary control-plane node in "functional-418000" cluster
I1007 04:45:24.932845    7047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1007 04:45:24.932859    7047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1007 04:45:24.932869    7047 cache.go:56] Caching tarball of preloaded images
I1007 04:45:24.932950    7047 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1007 04:45:24.932953    7047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1007 04:45:24.933019    7047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/functional-418000/config.json ...
I1007 04:45:24.933431    7047 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 04:45:24.933478    7047 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "functional-418000"
I1007 04:45:24.933486    7047 start.go:96] Skipping create...Using existing machine configuration
I1007 04:45:24.933489    7047 fix.go:54] fixHost starting: 
I1007 04:45:24.933618    7047 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
W1007 04:45:24.933627    7047 fix.go:138] unexpected machine state, will restart: <nil>
I1007 04:45:24.937771    7047 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
I1007 04:45:24.945785    7047 qemu.go:418] Using hvf for hardware acceleration
I1007 04:45:24.945823    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
I1007 04:45:24.948066    7047 main.go:141] libmachine: STDOUT: 
I1007 04:45:24.948091    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1007 04:45:24.948123    7047 fix.go:56] duration metric: took 14.633708ms for fixHost
I1007 04:45:24.948126    7047 start.go:83] releasing machines lock for "functional-418000", held for 14.64475ms
W1007 04:45:24.948132    7047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 04:45:24.948172    7047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 04:45:24.948176    7047 start.go:729] Will try again in 5 seconds ...
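Note the failure is host-side: socket_vmnet_client cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 command above is never actually launched, and the retry five seconds later (below) hits the same refusal. A minimal diagnostic sketch, assuming socket_vmnet was installed through Homebrew as the /opt/socket_vmnet and /opt/homebrew paths in this log suggest:

    # Is the daemon's unix socket present on the host?
    ls -l /var/run/socket_vmnet
    # Restart the Homebrew-managed daemon, then re-run the failing start.
    sudo brew services restart socket_vmnet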
I1007 04:45:29.949917    7047 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 04:45:29.950269    7047 start.go:364] duration metric: took 297.667µs to acquireMachinesLock for "functional-418000"
I1007 04:45:29.950398    7047 start.go:96] Skipping create...Using existing machine configuration
I1007 04:45:29.950409    7047 fix.go:54] fixHost starting: 
I1007 04:45:29.951070    7047 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
W1007 04:45:29.951089    7047 fix.go:138] unexpected machine state, will restart: <nil>
I1007 04:45:29.958449    7047 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
I1007 04:45:29.962485    7047 qemu.go:418] Using hvf for hardware acceleration
I1007 04:45:29.962682    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
I1007 04:45:29.972874    7047 main.go:141] libmachine: STDOUT: 
I1007 04:45:29.972926    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1007 04:45:29.973001    7047 fix.go:56] duration metric: took 22.5915ms for fixHost
I1007 04:45:29.973009    7047 start.go:83] releasing machines lock for "functional-418000", held for 22.717667ms
W1007 04:45:29.973176    7047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 04:45:29.981492    7047 out.go:201] 
W1007 04:45:29.985549    7047 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 04:45:29.985576    7047 out.go:270] * 
W1007 04:45:29.988127    7047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 04:45:29.996426    7047 out.go:201] 

* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
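For context on the assertion: functional_test.go:1228 greps the captured "minikube logs" output for the word "Linux" (presumably present in any log dump from a booted Linux guest); since the VM never started, the output holds only the host-side audit and start log shown above. A shell sketch of the same check, with the binary and profile taken from this run:

    out/minikube-darwin-arm64 -p functional-418000 logs 2>&1 | grep -q Linux \
        || echo 'expected minikube logs to include word: "Linux"'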

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2884839381/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:43 PDT |                     |
|         | -p download-only-915000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-915000                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| start   | -o=json --download-only                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | -p download-only-501000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-501000                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-915000                                                  | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| delete  | -p download-only-501000                                                  | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| start   | --download-only -p                                                       | binary-mirror-828000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | binary-mirror-828000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51043                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-828000                                                  | binary-mirror-828000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| addons  | disable dashboard -p                                                     | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | addons-193000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | addons-193000                                                            |                      |         |         |                     |                     |
| start   | -p addons-193000 --wait=true                                             | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-193000                                                         | addons-193000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
| start   | -p nospam-744000 -n=1 --memory=2250 --wait=false                         | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:45 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-744000 --log_dir                                                  | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-744000                                                         | nospam-744000        | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-418000 cache add                                              | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | minikube-local-cache-test:functional-418000                              |                      |         |         |                     |                     |
| cache   | functional-418000 cache delete                                           | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | minikube-local-cache-test:functional-418000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
| ssh     | functional-418000 ssh sudo                                               | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-418000                                                        | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-418000 ssh                                                    | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-418000 cache reload                                           | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
| ssh     | functional-418000 ssh                                                    | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT | 07 Oct 24 04:45 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-418000 kubectl --                                             | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --context functional-418000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-418000                                                     | functional-418000    | jenkins | v1.34.0 | 07 Oct 24 04:45 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/07 04:45:24
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1007 04:45:24.851217    7047 out.go:345] Setting OutFile to fd 1 ...
I1007 04:45:24.851356    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:45:24.851358    7047 out.go:358] Setting ErrFile to fd 2...
I1007 04:45:24.851359    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:45:24.851469    7047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:45:24.852584    7047 out.go:352] Setting JSON to false
I1007 04:45:24.870057    7047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4495,"bootTime":1728297029,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1007 04:45:24.870150    7047 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1007 04:45:24.876700    7047 out.go:177] * [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1007 04:45:24.884901    7047 out.go:177]   - MINIKUBE_LOCATION=19763
I1007 04:45:24.884936    7047 notify.go:220] Checking for updates...
I1007 04:45:24.891768    7047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
I1007 04:45:24.894788    7047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1007 04:45:24.897807    7047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1007 04:45:24.900797    7047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
I1007 04:45:24.902025    7047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1007 04:45:24.905060    7047 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:45:24.905107    7047 driver.go:394] Setting default libvirt URI to qemu:///system
I1007 04:45:24.909814    7047 out.go:177] * Using the qemu2 driver based on existing profile
I1007 04:45:24.914780    7047 start.go:297] selected driver: qemu2
I1007 04:45:24.914783    7047 start.go:901] validating driver "qemu2" against &{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1007 04:45:24.914835    7047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1007 04:45:24.917350    7047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1007 04:45:24.917375    7047 cni.go:84] Creating CNI manager for ""
I1007 04:45:24.917409    7047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1007 04:45:24.917461    7047 start.go:340] cluster config:
{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1007 04:45:24.921887    7047 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 04:45:24.928801    7047 out.go:177] * Starting "functional-418000" primary control-plane node in "functional-418000" cluster
I1007 04:45:24.932845    7047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1007 04:45:24.932859    7047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1007 04:45:24.932869    7047 cache.go:56] Caching tarball of preloaded images
I1007 04:45:24.932950    7047 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1007 04:45:24.932953    7047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1007 04:45:24.933019    7047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/functional-418000/config.json ...
I1007 04:45:24.933431    7047 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 04:45:24.933478    7047 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "functional-418000"
I1007 04:45:24.933486    7047 start.go:96] Skipping create...Using existing machine configuration
I1007 04:45:24.933489    7047 fix.go:54] fixHost starting: 
I1007 04:45:24.933618    7047 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
W1007 04:45:24.933627    7047 fix.go:138] unexpected machine state, will restart: <nil>
I1007 04:45:24.937771    7047 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
I1007 04:45:24.945785    7047 qemu.go:418] Using hvf for hardware acceleration
I1007 04:45:24.945823    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
I1007 04:45:24.948066    7047 main.go:141] libmachine: STDOUT: 
I1007 04:45:24.948091    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1007 04:45:24.948123    7047 fix.go:56] duration metric: took 14.633708ms for fixHost
I1007 04:45:24.948126    7047 start.go:83] releasing machines lock for "functional-418000", held for 14.64475ms
W1007 04:45:24.948132    7047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 04:45:24.948172    7047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 04:45:24.948176    7047 start.go:729] Will try again in 5 seconds ...
I1007 04:45:29.949917    7047 start.go:360] acquireMachinesLock for functional-418000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 04:45:29.950269    7047 start.go:364] duration metric: took 297.667µs to acquireMachinesLock for "functional-418000"
I1007 04:45:29.950398    7047 start.go:96] Skipping create...Using existing machine configuration
I1007 04:45:29.950409    7047 fix.go:54] fixHost starting: 
I1007 04:45:29.951070    7047 fix.go:112] recreateIfNeeded on functional-418000: state=Stopped err=<nil>
W1007 04:45:29.951089    7047 fix.go:138] unexpected machine state, will restart: <nil>
I1007 04:45:29.958449    7047 out.go:177] * Restarting existing qemu2 VM for "functional-418000" ...
I1007 04:45:29.962485    7047 qemu.go:418] Using hvf for hardware acceleration
I1007 04:45:29.962682    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:af:1e:f2:10:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/functional-418000/disk.qcow2
I1007 04:45:29.972874    7047 main.go:141] libmachine: STDOUT: 
I1007 04:45:29.972926    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1007 04:45:29.973001    7047 fix.go:56] duration metric: took 22.5915ms for fixHost
I1007 04:45:29.973009    7047 start.go:83] releasing machines lock for "functional-418000", held for 22.717667ms
W1007 04:45:29.973176    7047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 04:45:29.981492    7047 out.go:201] 
W1007 04:45:29.985549    7047 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 04:45:29.985576    7047 out.go:270] * 
W1007 04:45:29.988127    7047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 04:45:29.996426    7047 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
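
[Editor's note] Every start attempt in the "Last Start" log above dies at the same step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the /var/run/socket_vmnet unix socket refuses the connection, so the guest never boots. The remaining failures in this report (exit status 7, exit status 83, missing kubeconfig context) are all downstream of that one condition. A minimal, hypothetical Go probe, not part of the test suite, that reproduces just the socket check:

// probe_socket_vmnet.go - standalone sketch; the socket path is taken
// from the log above, everything else is illustrative.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this run the daemon was down, so this prints a
		// "connection refused" error matching the libmachine STDERR above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}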

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-418000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-418000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.694792ms)

** stderr ** 
	error: context "functional-418000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-418000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
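
[Editor's note] The repeated `context "functional-418000" does not exist` errors follow directly from the failed start: minikube only writes the cluster's kubeconfig context once the node provisions, so kubectl has nothing to select. A sketch of the same lookup kubectl performs, assuming the k8s.io/client-go library (the variable names here are ours, not the suite's):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default locations, as kubectl does.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["functional-418000"]; !ok {
		// This is the state every kubectl-based assertion above observes.
		fmt.Println(`context "functional-418000" does not exist`)
	}
}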

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-418000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-418000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-418000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-418000 --alsologtostderr -v=1] stderr:
I1007 04:46:13.995902    7374 out.go:345] Setting OutFile to fd 1 ...
I1007 04:46:13.996321    7374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:13.996326    7374 out.go:358] Setting ErrFile to fd 2...
I1007 04:46:13.996329    7374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:13.996490    7374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:46:13.996715    7374 mustload.go:65] Loading cluster: functional-418000
I1007 04:46:13.996919    7374 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:46:14.000755    7374 out.go:177] * The control-plane node functional-418000 host is not running: state=Stopped
I1007 04:46:14.004600    7374 out.go:177]   To start a cluster, run: "minikube start -p functional-418000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (46.413125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 status: exit status 7 (34.483334ms)

-- stdout --
	functional-418000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-418000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.476167ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-418000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 status -o json: exit status 7 (34.601958ms)

-- stdout --
	{"Name":"functional-418000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-418000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (34.16125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
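
[Editor's note] "minikube status" signals component state through its exit code (status 7 here, which helpers_test.go explicitly treats as "may be ok" for a stopped host), and "-o json" emits the single JSON object captured above. A small decoding sketch built around that captured object; the struct is ours and simply mirrors the keys shown:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the keys in the JSON captured above.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-418000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var s clusterStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s, apiserver=%s\n", s.Name, s.Host, s.APIServer)
}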

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-418000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-418000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.569833ms)

** stderr ** 
	error: context "functional-418000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-418000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-418000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-418000 describe po hello-node-connect: exit status 1 (27.094625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:1604: "kubectl --context functional-418000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-418000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-418000 logs -l app=hello-node-connect: exit status 1 (26.504042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:1610: "kubectl --context functional-418000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-418000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-418000 describe svc hello-node-connect: exit status 1 (27.197792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:1616: "kubectl --context functional-418000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (34.367292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-418000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (35.065458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "echo hello": exit status 83 (55.571917ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n"*. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "cat /etc/hostname": exit status 83 (46.02725ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-418000"- but got *"* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n"*. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (35.3515ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (59.906541ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-418000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 "sudo cat /home/docker/cp-test.txt": exit status 83 (48.17475ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-418000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-418000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cp functional-418000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd947936493/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 cp functional-418000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd947936493/001/cp-test.txt: exit status 83 (45.4745ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-418000 cp functional-418000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd947936493/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.979541ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd947936493/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (50.704333ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-418000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (45.986791ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-418000 ssh -n functional-418000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-418000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-418000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
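
[Editor's note] The "(-want +got)" blocks above are string diffs in the style of github.com/google/go-cmp: "-" lines are the expected cp-test.txt content, "+" lines are what the command actually returned (minikube's stopped-host advisory). A minimal sketch of producing such a diff, assuming go-cmp, whose output format these blocks match:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-418000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-418000\"\n"
	// cmp.Diff returns "" on equality; otherwise a (-want +got) report
	// like the ones in the log above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", diff)
	}
}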

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/6750/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/test/nested/copy/6750/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/test/nested/copy/6750/hosts": exit status 83 (48.302292ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/test/nested/copy/6750/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-418000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-418000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (35.217791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/6750.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/6750.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/6750.pem": exit status 83 (46.117667ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/6750.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo cat /etc/ssl/certs/6750.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6750.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-418000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-418000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/6750.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /usr/share/ca-certificates/6750.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /usr/share/ca-certificates/6750.pem": exit status 83 (44.76925ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/6750.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo cat /usr/share/ca-certificates/6750.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6750.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-418000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-418000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.008375ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-418000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-418000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/67502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/67502.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/67502.pem": exit status 83 (50.243958ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/67502.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo cat /etc/ssl/certs/67502.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/67502.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-418000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-418000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/67502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /usr/share/ca-certificates/67502.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /usr/share/ca-certificates/67502.pem": exit status 83 (46.716375ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/67502.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo cat /usr/share/ca-certificates/67502.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/67502.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-418000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-418000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (64.916375ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-418000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-418000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (49.044916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.35s)
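
Note on reproducing this check: CertSync expects a CA certificate dropped into $MINIKUBE_HOME/certs before "minikube start" to be synced into the node at the paths compared above, the .0 file being openssl's subject-hash name for the pem (3ec20f2e here). A minimal by-hand sketch, assuming the test's minikube_test2.pem and a profile that actually starts:

    openssl x509 -noout -subject_hash -in minikube_test2.pem    # prints 3ec20f2e
    cp minikube_test2.pem ~/.minikube/certs/
    minikube start -p functional-418000
    minikube -p functional-418000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"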

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-418000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-418000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.07475ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-418000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-418000 -n functional-418000: exit status 7 (35.702917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
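
The label assertions above never ran: kubectl bailed out because the kubeconfig has no functional-418000 context. With a live cluster, the same query can be issued by hand (sketch; --show-labels is an easier-to-read equivalent of the go-template):

    kubectl --context functional-418000 get nodes \
      --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    kubectl --context functional-418000 get nodes --show-labels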

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo systemctl is-active crio": exit status 83 (42.491291ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
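
Since docker is the configured runtime here, the test wants crio reported as inactive; "systemctl is-active" prints the unit state and exits non-zero for anything but active, which is distinct from minikube's own exit 83 above. A by-hand sketch once the node runs:

    minikube -p functional-418000 ssh "sudo systemctl is-active crio"      # expected: inactive
    minikube -p functional-418000 ssh "sudo systemctl is-active docker"    # expected: active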

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 version -o=json --components: exit status 83 (45.7525ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
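
The --components flag reports the version of each binary bundled in the node (hence the expected buildctl, containerd, crictl, docker, ... keys), so with a stopped host there is nothing to query. By-hand sketch:

    minikube -p functional-418000 version -o=json --components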

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-418000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-418000 image ls --format short --alsologtostderr:
I1007 04:46:14.445116    7389 out.go:345] Setting OutFile to fd 1 ...
I1007 04:46:14.445310    7389 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.445317    7389 out.go:358] Setting ErrFile to fd 2...
I1007 04:46:14.445320    7389 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.445438    7389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:46:14.445866    7389 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:46:14.445933    7389 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
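
This and the next three ImageList subtests run the same listing in each output format; with no running runtime every variant comes back empty. The equivalent by-hand commands (sketch):

    minikube -p functional-418000 image ls --format short
    minikube -p functional-418000 image ls --format table
    minikube -p functional-418000 image ls --format json
    minikube -p functional-418000 image ls --format yaml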

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-418000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-418000 image ls --format table --alsologtostderr:
I1007 04:46:14.693250    7401 out.go:345] Setting OutFile to fd 1 ...
I1007 04:46:14.693430    7401 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.693433    7401 out.go:358] Setting ErrFile to fd 2...
I1007 04:46:14.693435    7401 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.693586    7401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:46:14.694026    7401 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:46:14.694092    7401 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I1007 04:46:26.797882    6750 retry.go:31] will retry after 22.559578818s: Temporary Error: Get "http:": http: no Host in request URL
I1007 04:46:49.359671    6750 retry.go:31] will retry after 30.441080704s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-418000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-418000 image ls --format json --alsologtostderr:
I1007 04:46:14.652234    7399 out.go:345] Setting OutFile to fd 1 ...
I1007 04:46:14.652411    7399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.652414    7399 out.go:358] Setting ErrFile to fd 2...
I1007 04:46:14.652417    7399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.652546    7399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:46:14.653005    7399 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:46:14.653065    7399 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-418000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-418000 image ls --format yaml --alsologtostderr:
I1007 04:46:14.485355    7391 out.go:345] Setting OutFile to fd 1 ...
I1007 04:46:14.485553    7391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.485556    7391 out.go:358] Setting ErrFile to fd 2...
I1007 04:46:14.485559    7391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.485696    7391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:46:14.486178    7391 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:46:14.486240    7391 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh pgrep buildkitd: exit status 83 (45.861167ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image build -t localhost/my-image:functional-418000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-418000 image build -t localhost/my-image:functional-418000 testdata/build --alsologtostderr:
I1007 04:46:14.572339    7395 out.go:345] Setting OutFile to fd 1 ...
I1007 04:46:14.572791    7395 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.572795    7395 out.go:358] Setting ErrFile to fd 2...
I1007 04:46:14.572798    7395 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:46:14.572982    7395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:46:14.573408    7395 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:46:14.573892    7395 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:46:14.574130    7395 build_images.go:133] succeeded building to: 
I1007 04:46:14.574134    7395 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls
functional_test.go:446: expected "localhost/my-image:functional-418000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
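
The test first probes for buildkitd with pgrep and then drives "minikube image build"; both steps need a running node, which is why the build here "succeeds" against nothing. By-hand sketch:

    minikube -p functional-418000 image build -t localhost/my-image:functional-418000 testdata/build
    minikube -p functional-418000 image ls | grep my-image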

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-418000 docker-env) && out/minikube-darwin-arm64 status -p functional-418000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-418000 docker-env) && out/minikube-darwin-arm64 status -p functional-418000": exit status 1 (48.760041ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
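
The docker-env round trip being tested: evaluate the exported variables, then confirm minikube status still works in that environment. By-hand sketch:

    eval $(minikube -p functional-418000 docker-env)
    docker ps                              # now talks to the daemon inside the node
    minikube status -p functional-418000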

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2: exit status 83 (59.734458ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
** stderr ** 
	I1007 04:46:14.289616    7383 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:46:14.290236    7383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:14.290240    7383 out.go:358] Setting ErrFile to fd 2...
	I1007 04:46:14.290243    7383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:14.290379    7383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:46:14.290625    7383 mustload.go:65] Loading cluster: functional-418000
	I1007 04:46:14.290832    7383 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:46:14.306910    7383 out.go:177] * The control-plane node functional-418000 host is not running: state=Stopped
	I1007 04:46:14.311225    7383 out.go:177]   To start a cluster, run: "minikube start -p functional-418000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
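
"minikube update-context" rewrites the profile's kubeconfig entry to match the VM's current address; this and the next two UpdateContextCmd subtests only differ in which confirmation string ("No changes" vs. "context has been updated") they expect. By-hand sketch:

    minikube -p functional-418000 update-context
    kubectl config current-context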

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2: exit status 83 (47.676917ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
** stderr ** 
	I1007 04:46:14.397413    7387 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:46:14.397613    7387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:14.397616    7387 out.go:358] Setting ErrFile to fd 2...
	I1007 04:46:14.397618    7387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:14.397767    7387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:46:14.397998    7387 mustload.go:65] Loading cluster: functional-418000
	I1007 04:46:14.398218    7387 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:46:14.403263    7387 out.go:177] * The control-plane node functional-418000 host is not running: state=Stopped
	I1007 04:46:14.407196    7387 out.go:177]   To start a cluster, run: "minikube start -p functional-418000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2: exit status 83 (47.572458ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
** stderr ** 
	I1007 04:46:14.349636    7385 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:46:14.349803    7385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:14.349806    7385 out.go:358] Setting ErrFile to fd 2...
	I1007 04:46:14.349808    7385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:14.349960    7385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:46:14.350190    7385 mustload.go:65] Loading cluster: functional-418000
	I1007 04:46:14.350436    7385 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:46:14.355204    7385 out.go:177] * The control-plane node functional-418000 host is not running: state=Stopped
	I1007 04:46:14.359258    7385 out.go:177]   To start a cluster, run: "minikube start -p functional-418000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-418000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image load --daemon kicbase/echo-server:functional-418000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-418000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)
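
"image load --daemon" copies an image from the host's Docker daemon into the node's runtime; this and the ImageReloadDaemon/ImageTagAndLoadDaemon subtests below all verify it with a follow-up "image ls". The full round trip by hand (sketch):

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-418000
    minikube -p functional-418000 image load --daemon kicbase/echo-server:functional-418000
    minikube -p functional-418000 image ls | grep echo-server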

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-418000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-418000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (28.574834ms)

** stderr ** 
	error: context "functional-418000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-418000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 service list: exit status 83 (50.677458ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-418000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)
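
Every ServiceCmd subtest (DeployApp above, JSONOutput/HTTPS/Format/URL below) fails at the same first hurdle: no running cluster behind the profile. The sequence they exercise, by hand (sketch):

    kubectl --context functional-418000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-418000 expose deployment hello-node --type=NodePort --port=8080
    minikube -p functional-418000 service list
    minikube -p functional-418000 service hello-node --url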

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 service list -o json: exit status 83 (48.54675ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-418000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 service --namespace=default --https --url hello-node: exit status 83 (47.275166ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-418000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image load --daemon kicbase/echo-server:functional-418000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-418000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 service hello-node --url --format={{.IP}}: exit status 83 (58.735541ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-418000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.06s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 service hello-node --url: exit status 83 (46.701458ms)

-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-418000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test.go:1569: failed to parse "* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"": parse "* The control-plane node functional-418000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-418000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-418000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image load --daemon kicbase/echo-server:functional-418000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-418000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1007 04:45:33.095164    7178 out.go:345] Setting OutFile to fd 1 ...
I1007 04:45:33.095378    7178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:45:33.095381    7178 out.go:358] Setting ErrFile to fd 2...
I1007 04:45:33.095383    7178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:45:33.095534    7178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:45:33.095800    7178 mustload.go:65] Loading cluster: functional-418000
I1007 04:45:33.096041    7178 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:45:33.099950    7178 out.go:177] * The control-plane node functional-418000 host is not running: state=Stopped
I1007 04:45:33.111792    7178 out.go:177]   To start a cluster, run: "minikube start -p functional-418000"

stdout: * The control-plane node functional-418000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-418000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7177: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
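
RunSecondTunnel starts two concurrent "minikube tunnel" processes to check that they can coexist and shut down cleanly; here both die immediately with exit 83. By-hand sketch against a running profile:

    minikube -p functional-418000 tunnel --alsologtostderr &
    minikube -p functional-418000 tunnel --alsologtostderr &
    kubectl --context functional-418000 get svc nginx-svc    # EXTERNAL-IP populates once a tunnel is up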

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-418000": client config: context "functional-418000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1007 04:45:33.172704    6750 retry.go:31] will retry after 1.821344809s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-418000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-418000 get svc nginx-svc: exit status 1 (73.775417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-418000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-418000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.72s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image save kicbase/echo-server:functional-418000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-418000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
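
ImageSaveToFile and ImageLoadFromFile are two halves of one round trip: save the tagged image to a tar on the host, then load it back into the node. Since the save never produced a tar, the load had nothing to ship. By-hand sketch (/tmp standing in for the workspace path):

    minikube -p functional-418000 image save kicbase/echo-server:functional-418000 /tmp/echo-server-save.tar
    minikube -p functional-418000 image rm kicbase/echo-server:functional-418000
    minikube -p functional-418000 image load /tmp/echo-server-save.tar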

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1007 04:47:19.900074    6750 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.03085175s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 10 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
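
Note: the dig timeout above means the cluster DNS service at 10.96.0.10 never answered from the host, even though the scutil dump confirms the tunnel installed a cluster.local resolver (resolver #8, marked Reachable). A minimal Go sketch, not part of the test suite, that reproduces the same query without dig (service name and IP taken from the log above):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Bypass the system resolver and dial the cluster DNS service directly,
	// as `dig @10.96.0.10` does.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	ips, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed (same symptom as the test):", err)
		return
	}
	fmt.Println("resolved:", ips)
}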

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1007 04:47:45.045110    6750 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:47:55.047776    6750 retry.go:31] will retry after 2.224449539s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1007 04:48:07.276933    6750 retry.go:31] will retry after 2.786666694s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:57963->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
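
Note: AccessThroughDNS fails the same way one layer up: the HTTP client can only resolve the service name by forwarding to 10.96.0.10, and that read times out (see the `read udp ...->10.96.0.10:53: i/o timeout` above). A rough sketch of what the test loop does, with illustrative timeouts and backoff rather than the exact intervals retry.go computes:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	backoff := 2 * time.Second
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(string(body)) // the test expects "Welcome to nginx!" in here
		return
	}
	fmt.Println("all attempts failed")
}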

TestMultiControlPlane/serial/StartCluster (9.87s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-365000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-365000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.793110292s)

-- stdout --
	* [ha-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-365000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:48:15.444896    7430 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:48:15.445059    7430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:48:15.445062    7430 out.go:358] Setting ErrFile to fd 2...
	I1007 04:48:15.445064    7430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:48:15.445175    7430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:48:15.446318    7430 out.go:352] Setting JSON to false
	I1007 04:48:15.463886    7430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4666,"bootTime":1728297029,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:48:15.463952    7430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:48:15.469382    7430 out.go:177] * [ha-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:48:15.473357    7430 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:48:15.473400    7430 notify.go:220] Checking for updates...
	I1007 04:48:15.479305    7430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:48:15.482330    7430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:48:15.485350    7430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:48:15.486543    7430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:48:15.489308    7430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:48:15.492444    7430 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:48:15.496188    7430 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:48:15.503373    7430 start.go:297] selected driver: qemu2
	I1007 04:48:15.503379    7430 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:48:15.503387    7430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:48:15.505780    7430 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:48:15.509333    7430 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:48:15.512470    7430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:48:15.512492    7430 cni.go:84] Creating CNI manager for ""
	I1007 04:48:15.512519    7430 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 04:48:15.512529    7430 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 04:48:15.512556    7430 start.go:340] cluster config:
	{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:48:15.517099    7430 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:48:15.525302    7430 out.go:177] * Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	I1007 04:48:15.529321    7430 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:48:15.529342    7430 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:48:15.529353    7430 cache.go:56] Caching tarball of preloaded images
	I1007 04:48:15.529448    7430 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:48:15.529454    7430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:48:15.529699    7430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/ha-365000/config.json ...
	I1007 04:48:15.529711    7430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/ha-365000/config.json: {Name:mk9e038c08231c67fb6c3d4bc71e5fa1f7729816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:48:15.530029    7430 start.go:360] acquireMachinesLock for ha-365000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:48:15.530081    7430 start.go:364] duration metric: took 45.958µs to acquireMachinesLock for "ha-365000"
	I1007 04:48:15.530095    7430 start.go:93] Provisioning new machine with config: &{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:48:15.530143    7430 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:48:15.538298    7430 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 04:48:15.556168    7430 start.go:159] libmachine.API.Create for "ha-365000" (driver="qemu2")
	I1007 04:48:15.556196    7430 client.go:168] LocalClient.Create starting
	I1007 04:48:15.556300    7430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:48:15.556353    7430 main.go:141] libmachine: Decoding PEM data...
	I1007 04:48:15.556367    7430 main.go:141] libmachine: Parsing certificate...
	I1007 04:48:15.556413    7430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:48:15.556449    7430 main.go:141] libmachine: Decoding PEM data...
	I1007 04:48:15.556459    7430 main.go:141] libmachine: Parsing certificate...
	I1007 04:48:15.556878    7430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:48:15.696979    7430 main.go:141] libmachine: Creating SSH key...
	I1007 04:48:15.801248    7430 main.go:141] libmachine: Creating Disk image...
	I1007 04:48:15.801260    7430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:48:15.801444    7430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:48:15.811226    7430 main.go:141] libmachine: STDOUT: 
	I1007 04:48:15.811244    7430 main.go:141] libmachine: STDERR: 
	I1007 04:48:15.811308    7430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2 +20000M
	I1007 04:48:15.819691    7430 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:48:15.819705    7430 main.go:141] libmachine: STDERR: 
	I1007 04:48:15.819728    7430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:48:15.819733    7430 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:48:15.819741    7430 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:48:15.819774    7430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:ad:55:8a:67:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:48:15.821582    7430 main.go:141] libmachine: STDOUT: 
	I1007 04:48:15.821601    7430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:48:15.821623    7430 client.go:171] duration metric: took 265.418791ms to LocalClient.Create
	I1007 04:48:17.823818    7430 start.go:128] duration metric: took 2.293639417s to createHost
	I1007 04:48:17.823900    7430 start.go:83] releasing machines lock for "ha-365000", held for 2.293799208s
	W1007 04:48:17.823946    7430 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:48:17.836954    7430 out.go:177] * Deleting "ha-365000" in qemu2 ...
	W1007 04:48:17.861480    7430 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:48:17.861508    7430 start.go:729] Will try again in 5 seconds ...
	I1007 04:48:22.863756    7430 start.go:360] acquireMachinesLock for ha-365000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:48:22.864304    7430 start.go:364] duration metric: took 445.583µs to acquireMachinesLock for "ha-365000"
	I1007 04:48:22.864418    7430 start.go:93] Provisioning new machine with config: &{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:48:22.864693    7430 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:48:22.878665    7430 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 04:48:22.928044    7430 start.go:159] libmachine.API.Create for "ha-365000" (driver="qemu2")
	I1007 04:48:22.928097    7430 client.go:168] LocalClient.Create starting
	I1007 04:48:22.928250    7430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:48:22.928332    7430 main.go:141] libmachine: Decoding PEM data...
	I1007 04:48:22.928350    7430 main.go:141] libmachine: Parsing certificate...
	I1007 04:48:22.928415    7430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:48:22.928471    7430 main.go:141] libmachine: Decoding PEM data...
	I1007 04:48:22.928483    7430 main.go:141] libmachine: Parsing certificate...
	I1007 04:48:22.929153    7430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:48:23.083808    7430 main.go:141] libmachine: Creating SSH key...
	I1007 04:48:23.137868    7430 main.go:141] libmachine: Creating Disk image...
	I1007 04:48:23.137874    7430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:48:23.138042    7430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:48:23.148126    7430 main.go:141] libmachine: STDOUT: 
	I1007 04:48:23.148144    7430 main.go:141] libmachine: STDERR: 
	I1007 04:48:23.148201    7430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2 +20000M
	I1007 04:48:23.156610    7430 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:48:23.156624    7430 main.go:141] libmachine: STDERR: 
	I1007 04:48:23.156641    7430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:48:23.156645    7430 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:48:23.156652    7430 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:48:23.156692    7430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:26:b5:c6:aa:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:48:23.158518    7430 main.go:141] libmachine: STDOUT: 
	I1007 04:48:23.158532    7430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:48:23.158545    7430 client.go:171] duration metric: took 230.442459ms to LocalClient.Create
	I1007 04:48:25.160730    7430 start.go:128] duration metric: took 2.296003583s to createHost
	I1007 04:48:25.160832    7430 start.go:83] releasing machines lock for "ha-365000", held for 2.296496917s
	W1007 04:48:25.161239    7430 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:48:25.174920    7430 out.go:201] 
	W1007 04:48:25.179051    7430 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:48:25.179082    7430 out.go:270] * 
	* 
	W1007 04:48:25.181469    7430 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:48:25.190957    7430 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-365000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (71.890917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.87s)
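
Note: every TestMultiControlPlane failure below is downstream of the same root cause seen here: QEMU is launched through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, so no VM ever boots. A quick host-side Go sketch (not minikube code) to confirm whether anything is listening on that socket:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the QEMU launch failure above;
		// on this agent that usually means the socket_vmnet daemon is not running.
		fmt.Println("socket_vmnet not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up; the failure is elsewhere")
}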

TestMultiControlPlane/serial/DeployApp (108.73s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.96575ms)

** stderr ** 
	error: cluster "ha-365000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- rollout status deployment/busybox: exit status 1 (62.898ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.523667ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:25.469649    6750 retry.go:31] will retry after 1.40535091s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.159ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:26.986629    6750 retry.go:31] will retry after 764.014487ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.710959ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:27.862731    6750 retry.go:31] will retry after 2.599878484s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.41075ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:30.572782    6750 retry.go:31] will retry after 4.439816295s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.045375ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:35.123997    6750 retry.go:31] will retry after 6.857515127s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.446166ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:42.093420    6750 retry.go:31] will retry after 6.753115959s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.316875ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:48.958380    6750 retry.go:31] will retry after 9.066181994s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.314667ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:48:58.135957    6750 retry.go:31] will retry after 9.324690991s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.6055ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:49:07.572684    6750 retry.go:31] will retry after 37.048004152s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.238667ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:49:44.731389    6750 retry.go:31] will retry after 28.886277489s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.375625ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.039ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.879833ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec  -- nslookup kubernetes.default: exit status 1 (63.018917ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.950667ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (35.054792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (108.73s)
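
Note: the 108.73s runtime of this subtest is almost entirely retry backoff: each `kubectl get pods` fails immediately with "no server found" because the cluster was never created, and retry.go waits for growing, jittered intervals in between. A stdlib-only sketch of that pattern (the exact jitter minikube's retry.go applies may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts run out, sleeping
// base*2^n plus up to 50% random jitter between tries.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for n := 0; n < attempts; n++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := base << n
		sleep += time.Duration(rand.Int63n(int64(sleep / 2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retryWithBackoff(4, time.Second, func() error {
		return errors.New(`no server found for cluster "ha-365000"`)
	})
	fmt.Println("gave up:", err)
}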

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.279333ms)

** stderr ** 
	error: no server found for cluster "ha-365000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.802375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-365000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-365000 -v=7 --alsologtostderr: exit status 83 (45.388125ms)

-- stdout --
	* The control-plane node ha-365000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-365000"

-- /stdout --
** stderr ** 
	I1007 04:50:14.143658    7516 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:14.144080    7516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.144084    7516 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:14.144088    7516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.144267    7516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:14.144492    7516 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:14.144717    7516 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:14.147919    7516 out.go:177] * The control-plane node ha-365000 host is not running: state=Stopped
	I1007 04:50:14.150869    7516 out.go:177]   To start a cluster, run: "minikube start -p ha-365000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-365000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (35.027625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)
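
Note: exit status 83 here is not a crash; minikube detects the stopped host up front and prints the "To start a cluster" hint, and the test only observes the exit code. A sketch of how a caller reads such a code with os/exec (binary path taken from the log; the meaning of 83 is whatever minikube assigns it):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "node", "add", "-p", "ha-365000")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("minikube exited with code", ee.ExitCode()) // 83 in the run above
	}
}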

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-365000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-365000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.150833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-365000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-365000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-365000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (35.086875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
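
Note: the second error here ("unexpected end of JSON input") is a direct consequence of the first: with no cluster, kubectl prints nothing, and decoding an empty byte slice fails before any label check can run. Minimal demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // empty kubectl output
	fmt.Println(err)                           // unexpected end of JSON input
}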

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-365000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-365000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.106709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
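
Note: unlike the earlier subtests, this one gets well-formed JSON back; the assertions fail because the profile still holds its initial single control-plane node and "Starting" status. A trimmed sketch of the node-count check (struct shape inferred from the JSON above; most fields omitted):

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func main() {
	data := []byte(`{"invalid":[],"valid":[{"Name":"ha-365000","Status":"Starting","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(data, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	fmt.Printf("profile %q: status=%s nodes=%d (test expects 4 nodes and \"HAppy\")\n",
		p.Name, p.Status, len(p.Config.Nodes))
}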

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status --output json -v=7 --alsologtostderr: exit status 7 (34.522709ms)

-- stdout --
	{"Name":"ha-365000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1007 04:50:14.372390    7528 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:14.372577    7528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.372580    7528 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:14.372582    7528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.372724    7528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:14.372859    7528 out.go:352] Setting JSON to true
	I1007 04:50:14.372869    7528 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:14.372941    7528 notify.go:220] Checking for updates...
	I1007 04:50:14.373095    7528 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:14.373103    7528 status.go:174] checking status of ha-365000 ...
	I1007 04:50:14.373348    7528 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:14.373352    7528 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:14.373354    7528 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-365000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
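The decode failure above is the standard encoding/json type mismatch: with a single node, "minikube status --output json" emits one object (see the stdout block), while the test unmarshals into a slice, []cluster.Status. A self-contained reproduction with a stand-in type:

package main

import (
	"encoding/json"
	"fmt"
)

// status is a stand-in for cluster.Status; two fields are enough to
// demonstrate the shape mismatch.
type status struct {
	Name string
	Host string
}

func main() {
	// A one-node cluster prints a single JSON object, as in the stdout above.
	single := []byte(`{"Name":"ha-365000","Host":"Stopped"}`)

	// Decoding that object into a slice fails exactly like the test does:
	// "json: cannot unmarshal object into Go value of type []main.status".
	var many []status
	if err := json.Unmarshal(single, &many); err != nil {
		fmt.Println(err)
	}
}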
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.641417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.513041ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1007 04:50:14.442496    7532 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:14.442908    7532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.442912    7532 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:14.442914    7532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.443030    7532 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:14.443251    7532 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:14.443454    7532 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:14.447335    7532 out.go:201] 
	W1007 04:50:14.450362    7532 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1007 04:50:14.450368    7532 out.go:270] * 
	* 
	W1007 04:50:14.452223    7532 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:50:14.456362    7532 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-365000 node stop m02 -v=7 --alsologtostderr": exit status 85
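Exit status 85 follows directly from the profile config dumped earlier: the Nodes array holds only the unnamed primary node, so there is no "m02" to stop (minikube names secondary nodes m02, m03, and so on). A hedged sketch of that lookup, with field names taken from the dump and an illustrative helper (not minikube's code):

package main

import (
	"encoding/json"
	"fmt"
)

// node declares just the fields needed for the lookup; names match the
// profile dump above.
type node struct {
	Name         string `json:"Name"`
	ControlPlane bool   `json:"ControlPlane"`
}

// findNode mirrors the failing lookup for a secondary node by name.
func findNode(nodes []node, name string) (node, bool) {
	for _, n := range nodes {
		if n.Name == name {
			return n, true
		}
	}
	return node{}, false
}

func main() {
	// The dumped config shows a single, unnamed primary node.
	cfg := []byte(`{"Nodes":[{"Name":"","ControlPlane":true}]}`)
	var c struct{ Nodes []node }
	if err := json.Unmarshal(cfg, &c); err != nil {
		panic(err)
	}
	if _, ok := findNode(c.Nodes, "m02"); !ok {
		// The GUEST_NODE_RETRIEVE case reported above.
		fmt.Println("Could not find node m02")
	}
}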
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (34.543042ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:14.493940    7534 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:14.494135    7534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.494138    7534 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:14.494140    7534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.494292    7534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:14.494406    7534 out.go:352] Setting JSON to false
	I1007 04:50:14.494417    7534 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:14.494472    7534 notify.go:220] Checking for updates...
	I1007 04:50:14.494632    7534 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:14.494642    7534 status.go:174] checking status of ha-365000 ...
	I1007 04:50:14.494890    7534 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:14.494893    7534 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:14.494895    7534 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.836875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-365000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.3
1.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAut
hSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.231209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (50.02s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 node start m02 -v=7 --alsologtostderr: exit status 85 (53.215792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1007 04:50:14.650468    7543 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:14.650985    7543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.650989    7543 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:14.650991    7543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.651158    7543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:14.651481    7543 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:14.651678    7543 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:14.656386    7543 out.go:201] 
	W1007 04:50:14.660218    7543 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1007 04:50:14.660224    7543 out.go:270] * 
	* 
	W1007 04:50:14.662056    7543 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:50:14.666327    7543 out.go:201] 

** /stderr **
ha_test.go:424: I1007 04:50:14.650468    7543 out.go:345] Setting OutFile to fd 1 ...
I1007 04:50:14.650985    7543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:50:14.650989    7543 out.go:358] Setting ErrFile to fd 2...
I1007 04:50:14.650991    7543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:50:14.651158    7543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:50:14.651481    7543 mustload.go:65] Loading cluster: ha-365000
I1007 04:50:14.651678    7543 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:50:14.656386    7543 out.go:201] 
W1007 04:50:14.660218    7543 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1007 04:50:14.660224    7543 out.go:270] * 
* 
W1007 04:50:14.662056    7543 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 04:50:14.666327    7543 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-365000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (34.52375ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:14.704127    7545 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:14.704293    7545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.704296    7545 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:14.704298    7545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:14.704444    7545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:14.704571    7545 out.go:352] Setting JSON to false
	I1007 04:50:14.704581    7545 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:14.704635    7545 notify.go:220] Checking for updates...
	I1007 04:50:14.704783    7545 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:14.704791    7545 status.go:174] checking status of ha-365000 ...
	I1007 04:50:14.705028    7545 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:14.705031    7545 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:14.705033    7545 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 04:50:14.705960    6750 retry.go:31] will retry after 1.388834174s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (79.91625ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:16.175136    7547 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:16.175361    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:16.175365    7547 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:16.175368    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:16.175507    7547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:16.175657    7547 out.go:352] Setting JSON to false
	I1007 04:50:16.175671    7547 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:16.175713    7547 notify.go:220] Checking for updates...
	I1007 04:50:16.175928    7547 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:16.175937    7547 status.go:174] checking status of ha-365000 ...
	I1007 04:50:16.176235    7547 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:16.176240    7547 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:16.176243    7547 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 04:50:16.177233    6750 retry.go:31] will retry after 951.053202ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (80.132917ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:17.208553    7549 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:17.208794    7549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:17.208798    7549 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:17.208801    7549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:17.208967    7549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:17.209120    7549 out.go:352] Setting JSON to false
	I1007 04:50:17.209134    7549 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:17.209180    7549 notify.go:220] Checking for updates...
	I1007 04:50:17.209373    7549 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:17.209383    7549 status.go:174] checking status of ha-365000 ...
	I1007 04:50:17.209687    7549 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:17.209692    7549 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:17.209694    7549 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 04:50:17.210765    6750 retry.go:31] will retry after 2.809873391s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (79.174042ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:20.100049    7551 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:20.100280    7551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:20.100283    7551 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:20.100286    7551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:20.100443    7551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:20.100585    7551 out.go:352] Setting JSON to false
	I1007 04:50:20.100598    7551 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:20.100629    7551 notify.go:220] Checking for updates...
	I1007 04:50:20.100864    7551 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:20.100875    7551 status.go:174] checking status of ha-365000 ...
	I1007 04:50:20.101165    7551 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:20.101170    7551 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:20.101172    7551 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 04:50:20.102188    6750 retry.go:31] will retry after 1.902004056s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (78.349458ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:22.082754    7553 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:22.082969    7553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:22.082972    7553 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:22.082976    7553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:22.083163    7553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:22.083317    7553 out.go:352] Setting JSON to false
	I1007 04:50:22.083330    7553 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:22.083402    7553 notify.go:220] Checking for updates...
	I1007 04:50:22.083598    7553 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:22.083611    7553 status.go:174] checking status of ha-365000 ...
	I1007 04:50:22.083909    7553 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:22.083914    7553 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:22.083916    7553 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 04:50:22.084942    6750 retry.go:31] will retry after 6.064484119s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (81.240583ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:28.230892    7555 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:28.231087    7555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:28.231091    7555 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:28.231094    7555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:28.231294    7555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:28.231445    7555 out.go:352] Setting JSON to false
	I1007 04:50:28.231457    7555 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:28.231487    7555 notify.go:220] Checking for updates...
	I1007 04:50:28.231712    7555 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:28.231722    7555 status.go:174] checking status of ha-365000 ...
	I1007 04:50:28.232014    7555 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:28.232019    7555 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:28.232022    7555 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
I1007 04:50:28.233087    6750 retry.go:31] will retry after 9.048531944s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (78.689917ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:37.359572    7560 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:37.359831    7560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:37.359835    7560 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:37.359838    7560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:37.360001    7560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:37.360158    7560 out.go:352] Setting JSON to false
	I1007 04:50:37.360172    7560 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:37.360208    7560 notify.go:220] Checking for updates...
	I1007 04:50:37.360444    7560 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:37.360453    7560 status.go:174] checking status of ha-365000 ...
	I1007 04:50:37.360768    7560 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:37.360772    7560 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:37.360774    7560 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 04:50:37.361875    6750 retry.go:31] will retry after 10.826901534s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (79.28425ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:50:48.268203    7564 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:50:48.268438    7564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:48.268442    7564 out.go:358] Setting ErrFile to fd 2...
	I1007 04:50:48.268445    7564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:50:48.268614    7564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:50:48.268773    7564 out.go:352] Setting JSON to false
	I1007 04:50:48.268793    7564 mustload.go:65] Loading cluster: ha-365000
	I1007 04:50:48.268831    7564 notify.go:220] Checking for updates...
	I1007 04:50:48.269845    7564 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:50:48.269874    7564 status.go:174] checking status of ha-365000 ...
	I1007 04:50:48.270336    7564 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:50:48.270343    7564 status.go:384] host is not running, skipping remaining checks
	I1007 04:50:48.270346    7564 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 04:50:48.271513    6750 retry.go:31] will retry after 16.246467004s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (78.602833ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:51:04.596786    7569 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:51:04.596995    7569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:04.596999    7569 out.go:358] Setting ErrFile to fd 2...
	I1007 04:51:04.597003    7569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:04.597170    7569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:51:04.597322    7569 out.go:352] Setting JSON to false
	I1007 04:51:04.597335    7569 mustload.go:65] Loading cluster: ha-365000
	I1007 04:51:04.597379    7569 notify.go:220] Checking for updates...
	I1007 04:51:04.597614    7569 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:51:04.597622    7569 status.go:174] checking status of ha-365000 ...
	I1007 04:51:04.597929    7569 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:51:04.597933    7569 status.go:384] host is not running, skipping remaining checks
	I1007 04:51:04.597936    7569 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr" : exit status 7
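The eight status attempts above are driven by the test's retry helper (the retry.go:31 lines), which re-runs the check with a growing, jittered delay -- hence the waits of 1.38s, 0.95s, 2.8s, up to 16.2s -- until the command succeeds or time runs out. A minimal sketch of that pattern, assuming randomized exponential backoff rather than minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn with jittered exponential backoff until it succeeds or
// the deadline passes, mirroring the "will retry after ..." lines above.
func retry(fn func() error, deadline time.Duration) error {
	start := time.Now()
	wait := time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return err
		}
		// Jitter: sleep between 0.5x and 1.5x of the current wait.
		d := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
		wait *= 2
	}
}

func main() {
	attempts := 0
	_ = retry(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("exit status 7")
		}
		return nil
	}, 45*time.Second)
}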
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (36.885708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-365000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServer
Port\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"Container
Runtime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SS
HAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-365000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.574459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-365000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-365000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-365000 -v=7 --alsologtostderr: (3.054805292s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.2232045s)

-- stdout --
	* [ha-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	* Restarting existing qemu2 VM for "ha-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:51:07.883301    7598 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:51:07.883479    7598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:07.883483    7598 out.go:358] Setting ErrFile to fd 2...
	I1007 04:51:07.883486    7598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:07.883652    7598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:51:07.884875    7598 out.go:352] Setting JSON to false
	I1007 04:51:07.904302    7598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4838,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:51:07.904379    7598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:51:07.908151    7598 out.go:177] * [ha-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:51:07.914877    7598 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:51:07.914925    7598 notify.go:220] Checking for updates...
	I1007 04:51:07.920836    7598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:51:07.923857    7598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:51:07.925071    7598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:51:07.927816    7598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:51:07.930854    7598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:51:07.934183    7598 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:51:07.934231    7598 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:51:07.938822    7598 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:51:07.945854    7598 start.go:297] selected driver: qemu2
	I1007 04:51:07.945860    7598 start.go:901] validating driver "qemu2" against &{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:51:07.945924    7598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:51:07.948282    7598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:51:07.948305    7598 cni.go:84] Creating CNI manager for ""
	I1007 04:51:07.948328    7598 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 04:51:07.948369    7598 start.go:340] cluster config:
	{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:51:07.952868    7598 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:51:07.960838    7598 out.go:177] * Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	I1007 04:51:07.964744    7598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:51:07.964761    7598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:51:07.964776    7598 cache.go:56] Caching tarball of preloaded images
	I1007 04:51:07.964851    7598 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:51:07.964857    7598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:51:07.964910    7598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/ha-365000/config.json ...
	I1007 04:51:07.965312    7598 start.go:360] acquireMachinesLock for ha-365000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:51:07.965362    7598 start.go:364] duration metric: took 43.292µs to acquireMachinesLock for "ha-365000"
	I1007 04:51:07.965371    7598 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:51:07.965376    7598 fix.go:54] fixHost starting: 
	I1007 04:51:07.965501    7598 fix.go:112] recreateIfNeeded on ha-365000: state=Stopped err=<nil>
	W1007 04:51:07.965511    7598 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:51:07.973819    7598 out.go:177] * Restarting existing qemu2 VM for "ha-365000" ...
	I1007 04:51:07.977829    7598 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:51:07.977890    7598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:26:b5:c6:aa:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:51:07.980216    7598 main.go:141] libmachine: STDOUT: 
	I1007 04:51:07.980237    7598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:51:07.980271    7598 fix.go:56] duration metric: took 14.893459ms for fixHost
	I1007 04:51:07.980275    7598 start.go:83] releasing machines lock for "ha-365000", held for 14.909333ms
	W1007 04:51:07.980281    7598 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:51:07.980321    7598 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:51:07.980326    7598 start.go:729] Will try again in 5 seconds ...
	I1007 04:51:12.982505    7598 start.go:360] acquireMachinesLock for ha-365000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:51:12.982899    7598 start.go:364] duration metric: took 304.125µs to acquireMachinesLock for "ha-365000"
	I1007 04:51:12.982999    7598 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:51:12.983015    7598 fix.go:54] fixHost starting: 
	I1007 04:51:12.983684    7598 fix.go:112] recreateIfNeeded on ha-365000: state=Stopped err=<nil>
	W1007 04:51:12.983711    7598 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:51:12.988142    7598 out.go:177] * Restarting existing qemu2 VM for "ha-365000" ...
	I1007 04:51:12.994057    7598 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:51:12.994237    7598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:26:b5:c6:aa:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:51:13.004261    7598 main.go:141] libmachine: STDOUT: 
	I1007 04:51:13.004317    7598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:51:13.004394    7598 fix.go:56] duration metric: took 21.378917ms for fixHost
	I1007 04:51:13.004409    7598 start.go:83] releasing machines lock for "ha-365000", held for 21.488792ms
	W1007 04:51:13.004564    7598 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:51:13.011031    7598 out.go:201] 
	W1007 04:51:13.015130    7598 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:51:13.015173    7598 out.go:270] * 
	* 
	W1007 04:51:13.017266    7598 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:51:13.025104    7598 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-365000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-365000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (35.447792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.42s)
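Note: every qemu2 restart in this run fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so no VM ever boots and every later test in the serial chain finds the host Stopped. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew on this agent (the launchd and brew-services checks are assumptions, not taken from this log; only the socket path is):
	# assumption: Homebrew-managed socket_vmnet; socket path matches the log above
	ls -l /var/run/socket_vmnet                 # is the unix socket present?
	sudo launchctl list | grep -i socket_vmnet  # is a daemon loaded for it?
	sudo brew services restart socket_vmnet     # restart the daemon (needs root)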

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 node delete m03 -v=7 --alsologtostderr: exit status 83 (45.08425ms)

-- stdout --
	* The control-plane node ha-365000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-365000"

-- /stdout --
** stderr ** 
	I1007 04:51:13.183127    7610 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:51:13.183596    7610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:13.183601    7610 out.go:358] Setting ErrFile to fd 2...
	I1007 04:51:13.183603    7610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:13.183767    7610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:51:13.183976    7610 mustload.go:65] Loading cluster: ha-365000
	I1007 04:51:13.184201    7610 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:51:13.187928    7610 out.go:177] * The control-plane node ha-365000 host is not running: state=Stopped
	I1007 04:51:13.191051    7610 out.go:177]   To start a cluster, run: "minikube start -p ha-365000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-365000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (34.396416ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:51:13.227653    7612 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:51:13.227827    7612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:13.227831    7612 out.go:358] Setting ErrFile to fd 2...
	I1007 04:51:13.227833    7612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:13.227966    7612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:51:13.228088    7612 out.go:352] Setting JSON to false
	I1007 04:51:13.228098    7612 mustload.go:65] Loading cluster: ha-365000
	I1007 04:51:13.228161    7612 notify.go:220] Checking for updates...
	I1007 04:51:13.228312    7612 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:51:13.228320    7612 status.go:174] checking status of ha-365000 ...
	I1007 04:51:13.228572    7612 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:51:13.228575    7612 status.go:384] host is not running, skipping remaining checks
	I1007 04:51:13.228577    7612 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.965583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-365000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.627833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)
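Note: the profile JSON in these 'profile list' assertions is one very long line; a hypothetical convenience for scanning it on the agent (assumes jq is installed, which this log does not show):
	out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {Name, Status}'
For this run it would print Status "Starting" where the test expects "Degraded".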

TestMultiControlPlane/serial/StopCluster (1.99s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-365000 stop -v=7 --alsologtostderr: (1.879709708s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (73.046042ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:51:15.302860    7631 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:51:15.303077    7631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:15.303081    7631 out.go:358] Setting ErrFile to fd 2...
	I1007 04:51:15.303084    7631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:15.303237    7631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:51:15.303395    7631 out.go:352] Setting JSON to false
	I1007 04:51:15.303408    7631 mustload.go:65] Loading cluster: ha-365000
	I1007 04:51:15.303445    7631 notify.go:220] Checking for updates...
	I1007 04:51:15.303654    7631 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:51:15.303667    7631 status.go:174] checking status of ha-365000 ...
	I1007 04:51:15.303954    7631 status.go:371] ha-365000 host status = "Stopped" (err=<nil>)
	I1007 04:51:15.303959    7631 status.go:384] host is not running, skipping remaining checks
	I1007 04:51:15.303962    7631 status.go:176] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (35.935458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.99s)

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.192408833s)

-- stdout --
	* [ha-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	* Restarting existing qemu2 VM for "ha-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:51:15.373836    7635 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:51:15.373996    7635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:15.374000    7635 out.go:358] Setting ErrFile to fd 2...
	I1007 04:51:15.374003    7635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:15.374128    7635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:51:15.375221    7635 out.go:352] Setting JSON to false
	I1007 04:51:15.392769    7635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4846,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:51:15.392837    7635 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:51:15.397956    7635 out.go:177] * [ha-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:51:15.404873    7635 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:51:15.404905    7635 notify.go:220] Checking for updates...
	I1007 04:51:15.411882    7635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:51:15.414806    7635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:51:15.417844    7635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:51:15.420873    7635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:51:15.423810    7635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:51:15.427125    7635 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:51:15.427412    7635 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:51:15.431502    7635 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:51:15.439857    7635 start.go:297] selected driver: qemu2
	I1007 04:51:15.439863    7635 start.go:901] validating driver "qemu2" against &{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:51:15.439921    7635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:51:15.442390    7635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:51:15.442418    7635 cni.go:84] Creating CNI manager for ""
	I1007 04:51:15.442438    7635 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 04:51:15.442488    7635 start.go:340] cluster config:
	{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:51:15.446897    7635 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:51:15.454890    7635 out.go:177] * Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	I1007 04:51:15.458859    7635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:51:15.458875    7635 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:51:15.458889    7635 cache.go:56] Caching tarball of preloaded images
	I1007 04:51:15.458945    7635 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:51:15.458951    7635 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:51:15.459017    7635 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/ha-365000/config.json ...
	I1007 04:51:15.459351    7635 start.go:360] acquireMachinesLock for ha-365000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:51:15.459381    7635 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "ha-365000"
	I1007 04:51:15.459391    7635 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:51:15.459396    7635 fix.go:54] fixHost starting: 
	I1007 04:51:15.459518    7635 fix.go:112] recreateIfNeeded on ha-365000: state=Stopped err=<nil>
	W1007 04:51:15.459527    7635 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:51:15.467818    7635 out.go:177] * Restarting existing qemu2 VM for "ha-365000" ...
	I1007 04:51:15.471789    7635 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:51:15.471823    7635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:26:b5:c6:aa:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:51:15.474036    7635 main.go:141] libmachine: STDOUT: 
	I1007 04:51:15.474054    7635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:51:15.474086    7635 fix.go:56] duration metric: took 14.688458ms for fixHost
	I1007 04:51:15.474091    7635 start.go:83] releasing machines lock for "ha-365000", held for 14.706125ms
	W1007 04:51:15.474097    7635 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:51:15.474160    7635 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:51:15.474165    7635 start.go:729] Will try again in 5 seconds ...
	I1007 04:51:20.476324    7635 start.go:360] acquireMachinesLock for ha-365000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:51:20.476724    7635 start.go:364] duration metric: took 288.708µs to acquireMachinesLock for "ha-365000"
	I1007 04:51:20.476860    7635 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:51:20.476881    7635 fix.go:54] fixHost starting: 
	I1007 04:51:20.477575    7635 fix.go:112] recreateIfNeeded on ha-365000: state=Stopped err=<nil>
	W1007 04:51:20.477607    7635 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:51:20.486069    7635 out.go:177] * Restarting existing qemu2 VM for "ha-365000" ...
	I1007 04:51:20.490029    7635 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:51:20.490229    7635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:26:b5:c6:aa:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/ha-365000/disk.qcow2
	I1007 04:51:20.500306    7635 main.go:141] libmachine: STDOUT: 
	I1007 04:51:20.500362    7635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:51:20.500422    7635 fix.go:56] duration metric: took 23.545875ms for fixHost
	I1007 04:51:20.500441    7635 start.go:83] releasing machines lock for "ha-365000", held for 23.69525ms
	W1007 04:51:20.500683    7635 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:51:20.507031    7635 out.go:201] 
	W1007 04:51:20.511114    7635 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:51:20.511138    7635 out.go:270] * 
	* 
	W1007 04:51:20.513775    7635 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:51:20.520993    7635 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (74.516917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
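Note: once the socket is reachable again, the recovery minikube itself prints above is the cleanest path before a rerun: delete the stale profile, then start it with the test's own flags:
	out/minikube-darwin-arm64 delete -p ha-365000
	out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr --driver=qemu2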

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-365000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (36.965042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-365000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-365000 --control-plane -v=7 --alsologtostderr: exit status 83 (46.625458ms)

-- stdout --
	* The control-plane node ha-365000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-365000"

-- /stdout --
** stderr ** 
	I1007 04:51:20.734596    7650 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:51:20.734783    7650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:20.734786    7650 out.go:358] Setting ErrFile to fd 2...
	I1007 04:51:20.734789    7650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:51:20.734933    7650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:51:20.735184    7650 mustload.go:65] Loading cluster: ha-365000
	I1007 04:51:20.735396    7650 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:51:20.739602    7650 out.go:177] * The control-plane node ha-365000 host is not running: state=Stopped
	I1007 04:51:20.743595    7650 out.go:177]   To start a cluster, run: "minikube start -p ha-365000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-365000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.387292ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-365000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-365000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.390333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

TestImageBuild/serial/Setup (9.92s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-510000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-510000 --driver=qemu2 : exit status 80 (9.842617375s)

-- stdout --
	* [image-510000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-510000" primary control-plane node in "image-510000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-510000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-510000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-510000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-510000 -n image-510000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-510000 -n image-510000: exit status 7 (74.5265ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-510000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.92s)
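Every failed start in this report shares one root cause: nothing is listening on /var/run/socket_vmnet. A minimal sketch that reproduces the error outside minikube by dialing the socket directly (socket path taken from the logs above):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the socket_vmnet control socket the qemu2 driver uses.
		// With no socket_vmnet daemon running, this fails with
		// "connect: connection refused", matching the errors above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}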

TestJSONOutput/start/Command (9.71s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-439000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-439000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.711777958s)

-- stdout --
	{"specversion":"1.0","id":"163bd15c-9a2f-4aae-9ac8-8160719e0ed1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-439000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67515978-1b56-4f1d-b220-95e222ec3841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"6de6b4af-f50d-4553-9cee-c480afd63293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig"}}
	{"specversion":"1.0","id":"5a7ce83a-6341-4f2e-b9cd-4ccd6763fc29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9a440cd7-674c-4849-b910-8a2e1a92638f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"28b43d67-38aa-4662-9b53-9121595f4a67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube"}}
	{"specversion":"1.0","id":"f71f90c8-bae6-4c98-b8f6-f422a48f48c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9da6379b-192c-4a9a-9fac-94b28b981f25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3dfa5cbe-dbbb-43ca-9295-da946e782fb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"88ddbd59-7bf8-4065-9800-3e8c03402819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-439000\" primary control-plane node in \"json-output-439000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0598b927-80a4-431f-8561-72e031ca9283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9d0a11fd-4309-47b7-8818-d00fe73403c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-439000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd49bf96-fd61-4a44-8569-b1cf69bff1df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"78e28f9c-9be2-4201-950d-f31a13fda0cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"306d5516-49d3-49e5-ae66-470b81b51420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-439000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1847402b-b63a-4b56-90b0-17685f88afac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"fbcb0eeb-62e4-4f59-9d2d-28c5725a542e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-439000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.71s)
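The final two log lines explain the failure mode: the test decodes stdout line by line as JSON cloud events, and the raw OUTPUT:/ERROR: lines emitted by the qemu2 driver are not JSON, so decoding stops at the first such line. A minimal sketch of that behavior (simplified; not the code in json_output_test.go):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// A shortened stand-in for the mixed stdout captured above:
		// cloud events interleaved with raw driver output.
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			"OUTPUT: ",
			`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
		}
		for _, ln := range lines {
			var ev map[string]any
			if err := json.Unmarshal([]byte(ln), &ev); err != nil {
				// Fails on the second line with:
				// invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				return
			}
			fmt.Println("event type:", ev["type"])
		}
	}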

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-439000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-439000 --output=json --user=testUser: exit status 83 (85.964ms)

-- stdout --
	{"specversion":"1.0","id":"f08c3770-928d-453b-ada7-5676fd08e9c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-439000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"26d4e6fc-eac3-46bc-8e75-a41543074d21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-439000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-439000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-439000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-439000 --output=json --user=testUser: exit status 83 (48.289459ms)

-- stdout --
	* The control-plane node json-output-439000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-439000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-439000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-439000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-918000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-918000 --driver=qemu2 : exit status 80 (9.788124458s)

-- stdout --
	* [first-918000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-918000" primary control-plane node in "first-918000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-918000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-918000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-07 04:51:53.134685 -0700 PDT m=+503.348440960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-920000 -n second-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-920000 -n second-920000: exit status 85 (88.635459ms)

-- stdout --
	* Profile "second-920000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-920000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-920000" host is not running, skipping log retrieval (state="* Profile \"second-920000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-920000\"")
helpers_test.go:175: Cleaning up "second-920000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-920000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-07 04:51:53.344389 -0700 PDT m=+503.558145376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-918000 -n first-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-918000 -n first-918000: exit status 7 (35.04175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-918000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-918000
--- FAIL: TestMinikubeProfile (10.11s)

TestMountStart/serial/StartWithMountFirst (10.54s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-486000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-486000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.46777975s)

-- stdout --
	* [mount-start-1-486000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-486000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-486000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-486000 -n mount-start-1-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-486000 -n mount-start-1-486000: exit status 7 (75.248625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.54s)

TestMultiNode/serial/FreshStart2Nodes (10.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-328000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-328000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.941394958s)

-- stdout --
	* [multinode-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-328000" primary control-plane node in "multinode-328000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-328000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:52:04.223579    7789 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:52:04.223743    7789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:52:04.223746    7789 out.go:358] Setting ErrFile to fd 2...
	I1007 04:52:04.223749    7789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:52:04.223885    7789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:52:04.225019    7789 out.go:352] Setting JSON to false
	I1007 04:52:04.242976    7789 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4895,"bootTime":1728297029,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:52:04.243044    7789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:52:04.249359    7789 out.go:177] * [multinode-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:52:04.257306    7789 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:52:04.257363    7789 notify.go:220] Checking for updates...
	I1007 04:52:04.264357    7789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:52:04.267245    7789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:52:04.270303    7789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:52:04.273324    7789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:52:04.276248    7789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:52:04.279503    7789 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:52:04.283316    7789 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:52:04.290281    7789 start.go:297] selected driver: qemu2
	I1007 04:52:04.290286    7789 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:52:04.290291    7789 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:52:04.292768    7789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:52:04.296294    7789 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:52:04.299352    7789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:52:04.299374    7789 cni.go:84] Creating CNI manager for ""
	I1007 04:52:04.299401    7789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 04:52:04.299406    7789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 04:52:04.299445    7789 start.go:340] cluster config:
	{Name:multinode-328000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:52:04.304386    7789 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:52:04.312295    7789 out.go:177] * Starting "multinode-328000" primary control-plane node in "multinode-328000" cluster
	I1007 04:52:04.316298    7789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:52:04.316315    7789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:52:04.316324    7789 cache.go:56] Caching tarball of preloaded images
	I1007 04:52:04.316420    7789 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:52:04.316425    7789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:52:04.316670    7789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/multinode-328000/config.json ...
	I1007 04:52:04.316682    7789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/multinode-328000/config.json: {Name:mk2b56d5a925c000c4974a9939c0a34a9355bd97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:52:04.317069    7789 start.go:360] acquireMachinesLock for multinode-328000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:52:04.317122    7789 start.go:364] duration metric: took 47.542µs to acquireMachinesLock for "multinode-328000"
	I1007 04:52:04.317136    7789 start.go:93] Provisioning new machine with config: &{Name:multinode-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:52:04.317171    7789 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:52:04.325301    7789 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 04:52:04.343459    7789 start.go:159] libmachine.API.Create for "multinode-328000" (driver="qemu2")
	I1007 04:52:04.343488    7789 client.go:168] LocalClient.Create starting
	I1007 04:52:04.343562    7789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:52:04.343602    7789 main.go:141] libmachine: Decoding PEM data...
	I1007 04:52:04.343615    7789 main.go:141] libmachine: Parsing certificate...
	I1007 04:52:04.343668    7789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:52:04.343700    7789 main.go:141] libmachine: Decoding PEM data...
	I1007 04:52:04.343710    7789 main.go:141] libmachine: Parsing certificate...
	I1007 04:52:04.344162    7789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:52:04.485735    7789 main.go:141] libmachine: Creating SSH key...
	I1007 04:52:04.640479    7789 main.go:141] libmachine: Creating Disk image...
	I1007 04:52:04.640488    7789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:52:04.640687    7789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:52:04.650992    7789 main.go:141] libmachine: STDOUT: 
	I1007 04:52:04.651007    7789 main.go:141] libmachine: STDERR: 
	I1007 04:52:04.651062    7789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2 +20000M
	I1007 04:52:04.659471    7789 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:52:04.659485    7789 main.go:141] libmachine: STDERR: 
	I1007 04:52:04.659502    7789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:52:04.659506    7789 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:52:04.659516    7789 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:52:04.659552    7789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:28:ad:58:fa:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:52:04.661510    7789 main.go:141] libmachine: STDOUT: 
	I1007 04:52:04.661533    7789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:52:04.661555    7789 client.go:171] duration metric: took 318.060833ms to LocalClient.Create
	I1007 04:52:06.663730    7789 start.go:128] duration metric: took 2.346543s to createHost
	I1007 04:52:06.663785    7789 start.go:83] releasing machines lock for "multinode-328000", held for 2.346658875s
	W1007 04:52:06.663833    7789 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:52:06.677921    7789 out.go:177] * Deleting "multinode-328000" in qemu2 ...
	W1007 04:52:06.701069    7789 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:52:06.701097    7789 start.go:729] Will try again in 5 seconds ...
	I1007 04:52:11.703234    7789 start.go:360] acquireMachinesLock for multinode-328000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:52:11.703839    7789 start.go:364] duration metric: took 479.583µs to acquireMachinesLock for "multinode-328000"
	I1007 04:52:11.703969    7789 start.go:93] Provisioning new machine with config: &{Name:multinode-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:52:11.704228    7789 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:52:11.717857    7789 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 04:52:11.766421    7789 start.go:159] libmachine.API.Create for "multinode-328000" (driver="qemu2")
	I1007 04:52:11.766467    7789 client.go:168] LocalClient.Create starting
	I1007 04:52:11.766601    7789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:52:11.766681    7789 main.go:141] libmachine: Decoding PEM data...
	I1007 04:52:11.766701    7789 main.go:141] libmachine: Parsing certificate...
	I1007 04:52:11.766767    7789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:52:11.766822    7789 main.go:141] libmachine: Decoding PEM data...
	I1007 04:52:11.766837    7789 main.go:141] libmachine: Parsing certificate...
	I1007 04:52:11.767498    7789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:52:11.921195    7789 main.go:141] libmachine: Creating SSH key...
	I1007 04:52:12.062841    7789 main.go:141] libmachine: Creating Disk image...
	I1007 04:52:12.062847    7789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:52:12.063056    7789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:52:12.073573    7789 main.go:141] libmachine: STDOUT: 
	I1007 04:52:12.073592    7789 main.go:141] libmachine: STDERR: 
	I1007 04:52:12.073651    7789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2 +20000M
	I1007 04:52:12.082086    7789 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:52:12.082127    7789 main.go:141] libmachine: STDERR: 
	I1007 04:52:12.082147    7789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:52:12.082152    7789 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:52:12.082162    7789 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:52:12.082193    7789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dc:e7:46:31:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:52:12.084048    7789 main.go:141] libmachine: STDOUT: 
	I1007 04:52:12.084065    7789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:52:12.084078    7789 client.go:171] duration metric: took 317.605083ms to LocalClient.Create
	I1007 04:52:14.086304    7789 start.go:128] duration metric: took 2.382044s to createHost
	I1007 04:52:14.086406    7789 start.go:83] releasing machines lock for "multinode-328000", held for 2.382536s
	W1007 04:52:14.086738    7789 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-328000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-328000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:52:14.100475    7789 out.go:201] 
	W1007 04:52:14.104657    7789 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:52:14.104719    7789 out.go:270] * 
	* 
	W1007 04:52:14.107887    7789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:52:14.117492    7789 out.go:201] 

** /stderr **
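The trace above also shows how the VM network is wired when it does work: socket_vmnet_client connects to /var/run/socket_vmnet and runs qemu with the connection available as file descriptor 3, which is what "-netdev socket,id=net0,fd=3" refers to. A minimal Go sketch of the same fd-passing mechanism using exec.Cmd.ExtraFiles (an illustration of the technique, not socket_vmnet_client's actual implementation, which is a C program):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the vmnet helper socket; this is the step that fails
		// throughout this report with "connection refused".
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child process, which is
		// exactly what "-netdev socket,id=net0,fd=3" points at.
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3",
			"-device", "virtio-net-pci,netdev=net0")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}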
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-328000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (73.744667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.02s)
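Note that the disk-image preparation in the trace succeeds before the network step fails: libmachine converts a raw seed image to qcow2, then grows it by 20000 MB. A sketch of those two qemu-img invocations driven from Go (paths are placeholders; the exact commands appear verbatim in the log above):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		raw, disk := "disk.qcow2.raw", "disk.qcow2" // placeholder paths

		// qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, disk).CombinedOutput(); err != nil {
			log.Fatalf("convert: %v\n%s", err, out)
		}
		// qemu-img resize disk.qcow2 +20000M
		if out, err := exec.Command("qemu-img", "resize", disk, "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("resize: %v\n%s", err, out)
		}
	}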

TestMultiNode/serial/DeployApp2Nodes (106.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (65.84975ms)

** stderr ** 
	error: cluster "multinode-328000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- rollout status deployment/busybox: exit status 1 (61.785625ms)

** stderr ** 
	error: no server found for cluster "multinode-328000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.794916ms)

** stderr ** 
	error: no server found for cluster "multinode-328000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:14.397909    6750 retry.go:31] will retry after 1.263117374s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.163084ms)

** stderr ** 
	error: no server found for cluster "multinode-328000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:15.772591    6750 retry.go:31] will retry after 1.524292578s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.512208ms)

** stderr ** 
	error: no server found for cluster "multinode-328000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:17.408691    6750 retry.go:31] will retry after 3.293856156s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.981708ms)

** stderr ** 
	error: no server found for cluster "multinode-328000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:20.814890    6750 retry.go:31] will retry after 2.444584802s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.003125ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:23.370860    6750 retry.go:31] will retry after 4.402414558s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.55975ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:27.884151    6750 retry.go:31] will retry after 10.61152483s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.390708ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:38.608399    6750 retry.go:31] will retry after 7.081813933s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.661792ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:52:45.801297    6750 retry.go:31] will retry after 17.879667548s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.469042ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:53:03.790807    6750 retry.go:31] will retry after 26.438600003s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.05ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 04:53:30.341817    6750 retry.go:31] will retry after 30.3495949s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.379208ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.646167ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.458ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.829583ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.271166ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.671084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (106.88s)
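
A note on the retry cadence above: the delays grow roughly geometrically (1.2s, 1.5s, 3.3s, ... 30.3s) because the harness's retry helper (retry.go:31) applies exponential backoff with jitter. Below is a minimal sketch of that pattern in Go, under stated assumptions: fetchPodIPs is a hypothetical stand-in for the failing kubectl call, and the attempt/backoff constants are illustrative, not minikube's.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// fetchPodIPs stands in for the kubectl jsonpath query that keeps
// failing above; here it always returns the same error.
func fetchPodIPs() error {
	return errors.New(`no server found for cluster "multinode-328000"`)
}

func main() {
	backoff := time.Second
	const maxAttempts = 5
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := fetchPodIPs()
		if err == nil {
			fmt.Println("pod IPs retrieved")
			return
		}
		// Add jitter on top of the base delay, then double the base; this
		// yields the irregular but growing waits seen in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		backoff *= 2
	}
	fmt.Println("giving up: failed to retrieve Pod IPs")
}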

TestMultiNode/serial/PingHostFrom2Pods (0.1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-328000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.16675ms)
** stderr **
	error: no server found for cluster "multinode-328000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.691125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

TestMultiNode/serial/AddNode (0.08s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-328000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-328000 -v 3 --alsologtostderr: exit status 83 (44.663959ms)
-- stdout --
	* The control-plane node multinode-328000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-328000"
-- /stdout --
** stderr ** 
	I1007 04:54:01.216287    7872 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:01.216495    7872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.216498    7872 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:01.216501    7872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.216621    7872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:01.216856    7872 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:01.217065    7872 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:01.221044    7872 out.go:177] * The control-plane node multinode-328000 host is not running: state=Stopped
	I1007 04:54:01.223935    7872 out.go:177]   To start a cluster, run: "minikube start -p multinode-328000"
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-328000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.424125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-328000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-328000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.937875ms)
** stderr **
	Error in configuration: context was not found for specified context: multinode-328000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-328000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-328000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.562917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.09s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-328000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-328000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-328000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-328000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.961ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
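
The assertion above parses the output of `out/minikube-darwin-arm64 profile list --output json` and counts the entries in Config.Nodes; the restarted profile reports a single control-plane node where the test expects three. A minimal sketch of that decode, assuming only the field names visible in the JSON above (the profileList type and the trimmed sample literal are illustrative, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just the keys needed from the JSON in the log;
// every other field is ignored by encoding/json.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Heavily trimmed sample with the same shape as the logged output.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-328000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// Prints 1 here; the test wanted 3 after the two added workers.
		fmt.Printf("%s has %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}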

TestMultiNode/serial/CopyFile (0.07s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status --output json --alsologtostderr: exit status 7 (34.696292ms)
-- stdout --
	{"Name":"multinode-328000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I1007 04:54:01.448355    7884 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:01.448541    7884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.448545    7884 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:01.448547    7884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.448669    7884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:01.448790    7884 out.go:352] Setting JSON to true
	I1007 04:54:01.448801    7884 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:01.448872    7884 notify.go:220] Checking for updates...
	I1007 04:54:01.449021    7884 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:01.449030    7884 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:01.449278    7884 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:01.449282    7884 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:01.449284    7884 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-328000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.2055ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
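
The decode error above ("json: cannot unmarshal object into Go value of type []cluster.Status") arises because `status --output json` prints a bare object for a single-node profile while the caller unmarshals into a slice. A minimal sketch of a tolerant decoder that accepts either shape; the status struct and decodeStatuses helper are hypothetical, not minikube's cluster.Status:

package main

import (
	"encoding/json"
	"fmt"
)

// status carries just the fields used below; the real type has more.
type status struct {
	Name string
	Host string
}

// decodeStatuses accepts either a JSON array of statuses or, as emitted
// for one-node profiles, a single bare object.
func decodeStatuses(raw []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-328000","Host":"Stopped"}`)
	sts, err := decodeStatuses(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", sts) // [{multinode-328000 Stopped}]
}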

TestMultiNode/serial/StopNode (0.16s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 node stop m03: exit status 85 (51.728458ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-328000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status: exit status 7 (35.55ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr: exit status 7 (34.252542ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:01.604959    7892 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:01.605148    7892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.605151    7892 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:01.605153    7892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.605281    7892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:01.605415    7892 out.go:352] Setting JSON to false
	I1007 04:54:01.605426    7892 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:01.605475    7892 notify.go:220] Checking for updates...
	I1007 04:54:01.605657    7892 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:01.605664    7892 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:01.605897    7892 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:01.605901    7892 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:01.605903    7892 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr": multinode-328000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.718333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)

TestMultiNode/serial/StartAfterStop (55.82s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 node start m03 -v=7 --alsologtostderr: exit status 85 (51.976292ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1007 04:54:01.674670    7896 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:01.675188    7896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.675192    7896 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:01.675195    7896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.675371    7896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:01.675586    7896 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:01.675804    7896 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:01.680107    7896 out.go:201] 
	W1007 04:54:01.684040    7896 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1007 04:54:01.684045    7896 out.go:270] * 
	* 
	W1007 04:54:01.685876    7896 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:54:01.690022    7896 out.go:201] 
** /stderr **
multinode_test.go:284: I1007 04:54:01.674670    7896 out.go:345] Setting OutFile to fd 1 ...
I1007 04:54:01.675188    7896 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:54:01.675192    7896 out.go:358] Setting ErrFile to fd 2...
I1007 04:54:01.675195    7896 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 04:54:01.675371    7896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
I1007 04:54:01.675586    7896 mustload.go:65] Loading cluster: multinode-328000
I1007 04:54:01.675804    7896 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 04:54:01.680107    7896 out.go:201] 
W1007 04:54:01.684040    7896 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1007 04:54:01.684045    7896 out.go:270] * 
* 
W1007 04:54:01.685876    7896 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 04:54:01.690022    7896 out.go:201] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-328000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (34.49875ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:01.726584    7898 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:01.726752    7898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.726755    7898 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:01.726757    7898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:01.726890    7898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:01.727012    7898 out.go:352] Setting JSON to false
	I1007 04:54:01.727023    7898 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:01.727089    7898 notify.go:220] Checking for updates...
	I1007 04:54:01.727247    7898 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:01.727259    7898 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:01.727537    7898 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:01.727541    7898 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:01.727543    7898 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:01.728467    6750 retry.go:31] will retry after 663.289592ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (77.560875ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:02.469619    7900 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:02.469819    7900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:02.469823    7900 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:02.469827    7900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:02.469993    7900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:02.470138    7900 out.go:352] Setting JSON to false
	I1007 04:54:02.470151    7900 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:02.470185    7900 notify.go:220] Checking for updates...
	I1007 04:54:02.470402    7900 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:02.470411    7900 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:02.470716    7900 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:02.470720    7900 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:02.470723    7900 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:02.471710    6750 retry.go:31] will retry after 1.971587838s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (79.081958ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:04.522559    7902 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:04.522794    7902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:04.522798    7902 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:04.522801    7902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:04.522988    7902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:04.523156    7902 out.go:352] Setting JSON to false
	I1007 04:54:04.523169    7902 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:04.523209    7902 notify.go:220] Checking for updates...
	I1007 04:54:04.523438    7902 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:04.523447    7902 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:04.523739    7902 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:04.523743    7902 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:04.523746    7902 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:04.524756    6750 retry.go:31] will retry after 2.334549551s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (79.546334ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:06.939038    7904 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:06.939218    7904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:06.939222    7904 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:06.939226    7904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:06.939417    7904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:06.939564    7904 out.go:352] Setting JSON to false
	I1007 04:54:06.939577    7904 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:06.939629    7904 notify.go:220] Checking for updates...
	I1007 04:54:06.939856    7904 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:06.939866    7904 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:06.940142    7904 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:06.940147    7904 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:06.940149    7904 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:06.941183    6750 retry.go:31] will retry after 2.624518275s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (80.862209ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:09.646825    7906 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:09.647061    7906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:09.647065    7906 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:09.647069    7906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:09.647235    7906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:09.647413    7906 out.go:352] Setting JSON to false
	I1007 04:54:09.647426    7906 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:09.647469    7906 notify.go:220] Checking for updates...
	I1007 04:54:09.647712    7906 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:09.647722    7906 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:09.648010    7906 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:09.648014    7906 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:09.648016    7906 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:09.649011    6750 retry.go:31] will retry after 2.611341548s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (80.270833ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:12.340784    7908 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:12.341012    7908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:12.341016    7908 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:12.341019    7908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:12.341184    7908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:12.341355    7908 out.go:352] Setting JSON to false
	I1007 04:54:12.341368    7908 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:12.341402    7908 notify.go:220] Checking for updates...
	I1007 04:54:12.341626    7908 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:12.341636    7908 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:12.341943    7908 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:12.341947    7908 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:12.341949    7908 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:12.342999    6750 retry.go:31] will retry after 6.103651863s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (80.631041ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:18.527366    7910 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:18.527598    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:18.527602    7910 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:18.527605    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:18.527765    7910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:18.527914    7910 out.go:352] Setting JSON to false
	I1007 04:54:18.527930    7910 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:18.527969    7910 notify.go:220] Checking for updates...
	I1007 04:54:18.528193    7910 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:18.528203    7910 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:18.528496    7910 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:18.528501    7910 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:18.528503    7910 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:18.529582    6750 retry.go:31] will retry after 13.329795698s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (78.741833ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:31.938518    7915 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:31.938717    7915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:31.938722    7915 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:31.938725    7915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:31.938877    7915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:31.939021    7915 out.go:352] Setting JSON to false
	I1007 04:54:31.939034    7915 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:31.939085    7915 notify.go:220] Checking for updates...
	I1007 04:54:31.939302    7915 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:31.939311    7915 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:31.939602    7915 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:31.939606    7915 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:31.939609    7915 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1007 04:54:31.940616    6750 retry.go:31] will retry after 25.40167063s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr: exit status 7 (79.695042ms)
-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1007 04:54:57.422331    7917 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:54:57.422549    7917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:57.422552    7917 out.go:358] Setting ErrFile to fd 2...
	I1007 04:54:57.422555    7917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:54:57.422701    7917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:54:57.422838    7917 out.go:352] Setting JSON to false
	I1007 04:54:57.422851    7917 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:54:57.422891    7917 notify.go:220] Checking for updates...
	I1007 04:54:57.423098    7917 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:54:57.423107    7917 status.go:174] checking status of multinode-328000 ...
	I1007 04:54:57.423407    7917 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:54:57.423411    7917 status.go:384] host is not running, skipping remaining checks
	I1007 04:54:57.423414    7917 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-328000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (36.381458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (55.82s)
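
The status probes above are paced by minikube's retry helper (retry.go:31), which sleeps a growing, jittered interval between attempts (13.3s, then 25.4s in this run). A minimal sketch of that retry pattern follows; the function name, the base interval, and the jitter factor are illustrative assumptions, not minikube's actual code.

	// retrysketch.go: retry a command until it succeeds or a deadline passes,
	// sleeping a randomized, growing interval between attempts (the pattern
	// behind the "will retry after ..." lines above).
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(fn func() error, maxElapsed time.Duration) error {
		start := time.Now()
		wait := 10 * time.Second // assumed base interval
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", maxElapsed, err)
			}
			// Jitter keeps parallel tests from retrying in lockstep.
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			wait = wait * 2 // grow the interval after each failure
		}
	}

	func main() {
		_ = retry(func() error { return errors.New("exit status 7") }, 45*time.Second)
	}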

TestMultiNode/serial/RestartKeepsNodes (9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-328000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-328000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-328000: (3.626325959s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-328000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-328000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229271708s)

-- stdout --
	* [multinode-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-328000" primary control-plane node in "multinode-328000" cluster
	* Restarting existing qemu2 VM for "multinode-328000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-328000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:55:01.189981    7944 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:55:01.190188    7944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:01.190192    7944 out.go:358] Setting ErrFile to fd 2...
	I1007 04:55:01.190196    7944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:01.190361    7944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:55:01.191619    7944 out.go:352] Setting JSON to false
	I1007 04:55:01.210812    7944 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5072,"bootTime":1728297029,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:55:01.210884    7944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:55:01.215733    7944 out.go:177] * [multinode-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:55:01.222582    7944 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:55:01.222655    7944 notify.go:220] Checking for updates...
	I1007 04:55:01.227939    7944 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:55:01.230585    7944 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:55:01.233579    7944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:55:01.236591    7944 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:55:01.239562    7944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:55:01.242970    7944 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:55:01.243018    7944 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:55:01.247579    7944 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:55:01.254568    7944 start.go:297] selected driver: qemu2
	I1007 04:55:01.254573    7944 start.go:901] validating driver "qemu2" against &{Name:multinode-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:55:01.254634    7944 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:55:01.257084    7944 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:55:01.257106    7944 cni.go:84] Creating CNI manager for ""
	I1007 04:55:01.257132    7944 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 04:55:01.257174    7944 start.go:340] cluster config:
	{Name:multinode-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:55:01.261597    7944 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:01.269530    7944 out.go:177] * Starting "multinode-328000" primary control-plane node in "multinode-328000" cluster
	I1007 04:55:01.273578    7944 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:55:01.273594    7944 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:55:01.273600    7944 cache.go:56] Caching tarball of preloaded images
	I1007 04:55:01.273740    7944 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:55:01.273760    7944 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:55:01.273835    7944 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/multinode-328000/config.json ...
	I1007 04:55:01.274319    7944 start.go:360] acquireMachinesLock for multinode-328000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:55:01.274371    7944 start.go:364] duration metric: took 46.041µs to acquireMachinesLock for "multinode-328000"
	I1007 04:55:01.274380    7944 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:55:01.274385    7944 fix.go:54] fixHost starting: 
	I1007 04:55:01.274509    7944 fix.go:112] recreateIfNeeded on multinode-328000: state=Stopped err=<nil>
	W1007 04:55:01.274518    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:55:01.282611    7944 out.go:177] * Restarting existing qemu2 VM for "multinode-328000" ...
	I1007 04:55:01.286527    7944 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:55:01.286575    7944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dc:e7:46:31:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:55:01.288740    7944 main.go:141] libmachine: STDOUT: 
	I1007 04:55:01.288755    7944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:55:01.288783    7944 fix.go:56] duration metric: took 14.3955ms for fixHost
	I1007 04:55:01.288786    7944 start.go:83] releasing machines lock for "multinode-328000", held for 14.410833ms
	W1007 04:55:01.288791    7944 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:55:01.288825    7944 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:55:01.288829    7944 start.go:729] Will try again in 5 seconds ...
	I1007 04:55:06.291002    7944 start.go:360] acquireMachinesLock for multinode-328000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:55:06.291514    7944 start.go:364] duration metric: took 390.209µs to acquireMachinesLock for "multinode-328000"
	I1007 04:55:06.291677    7944 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:55:06.291699    7944 fix.go:54] fixHost starting: 
	I1007 04:55:06.292384    7944 fix.go:112] recreateIfNeeded on multinode-328000: state=Stopped err=<nil>
	W1007 04:55:06.292412    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:55:06.299905    7944 out.go:177] * Restarting existing qemu2 VM for "multinode-328000" ...
	I1007 04:55:06.303882    7944 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:55:06.304080    7944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dc:e7:46:31:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:55:06.315036    7944 main.go:141] libmachine: STDOUT: 
	I1007 04:55:06.315091    7944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:55:06.315169    7944 fix.go:56] duration metric: took 23.472791ms for fixHost
	I1007 04:55:06.315185    7944 start.go:83] releasing machines lock for "multinode-328000", held for 23.647541ms
	W1007 04:55:06.315364    7944 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-328000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-328000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:55:06.323847    7944 out.go:201] 
	W1007 04:55:06.327965    7944 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:55:06.327983    7944 out.go:270] * 
	* 
	W1007 04:55:06.329767    7944 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:55:06.338739    7944 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-328000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-328000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (36.67725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.00s)
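
Every restart attempt in this run dies before the VM boots: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the socket_vmnet daemon is almost certainly not running on this CI host. A quick way to confirm that precondition (a standalone sketch, not part of the test suite) is to dial the socket directly:

	// socketprobe.go: check whether anything is listening on the socket_vmnet
	// unix socket that the qemu2 driver needs for VM networking.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host this branch fires with "connection refused",
			// matching the driver errors captured in the log above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}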

TestMultiNode/serial/DeleteNode (0.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 node delete m03: exit status 83 (47.736ms)

-- stdout --
	* The control-plane node multinode-328000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-328000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-328000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr: exit status 7 (35.361ms)

-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:55:06.544087    7958 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:55:06.544261    7958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:06.544264    7958 out.go:358] Setting ErrFile to fd 2...
	I1007 04:55:06.544266    7958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:06.544417    7958 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:55:06.544545    7958 out.go:352] Setting JSON to false
	I1007 04:55:06.544556    7958 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:55:06.544628    7958 notify.go:220] Checking for updates...
	I1007 04:55:06.545762    7958 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:55:06.545772    7958 status.go:174] checking status of multinode-328000 ...
	I1007 04:55:06.546002    7958 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:55:06.546006    7958 status.go:384] host is not running, skipping remaining checks
	I1007 04:55:06.546008    7958 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.550541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)
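
Three distinct exit codes appear in this suite: 7 from "status" against a stopped host, 80 when the driver fails to provision the guest (GUEST_PROVISION), and 83 when a command is refused because the control-plane node is not running. The mapping below is inferred from the log text in this report, not taken from minikube's source:

	// exitcodes.go: the minikube exit codes observed in this report, with
	// meanings inferred from the surrounding log lines (an assumption, not an
	// authoritative table).
	package main

	import "fmt"

	func main() {
		observed := map[int]string{
			7:  "status: host stopped, remaining checks skipped",
			80: "GUEST_PROVISION: driver could not start the VM",
			83: "control-plane node not running (state=Stopped)",
		}
		for code, meaning := range observed {
			fmt.Printf("exit status %d: %s\n", code, meaning)
		}
	}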

TestMultiNode/serial/StopMultiNode (4.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-328000 stop: (4.015904792s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status: exit status 7 (70.765042ms)

-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr: exit status 7 (35.346834ms)

-- stdout --
	multinode-328000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 04:55:10.702335    7984 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:55:10.702512    7984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:10.702515    7984 out.go:358] Setting ErrFile to fd 2...
	I1007 04:55:10.702517    7984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:10.702635    7984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:55:10.702757    7984 out.go:352] Setting JSON to false
	I1007 04:55:10.702769    7984 mustload.go:65] Loading cluster: multinode-328000
	I1007 04:55:10.702825    7984 notify.go:220] Checking for updates...
	I1007 04:55:10.702979    7984 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:55:10.702988    7984 status.go:174] checking status of multinode-328000 ...
	I1007 04:55:10.703243    7984 status.go:371] multinode-328000 host status = "Stopped" (err=<nil>)
	I1007 04:55:10.703247    7984 status.go:384] host is not running, skipping remaining checks
	I1007 04:55:10.703249    7984 status.go:176] multinode-328000 status: &{Name:multinode-328000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr": multinode-328000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-328000 status --alsologtostderr": multinode-328000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (34.486833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (4.16s)
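
The two "incorrect number" failures above suggest the test counts "host: Stopped" and "kubelet: Stopped" lines in the status output and expects one per node; since the worker nodes were never created in this run, only the control plane reports in. A sketch of that style of assertion follows; the expected count of 2 is an assumption based on the two-node cluster this serial suite should have at this point:

	// stoppedcount.go: count stopped hosts/kubelets in `minikube status` output.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// The single-node output captured above.
		out := "multinode-328000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

		hosts := strings.Count(out, "host: Stopped")
		kubelets := strings.Count(out, "kubelet: Stopped")
		if hosts != 2 || kubelets != 2 {
			fmt.Printf("incorrect number of stopped hosts (%d) and kubelets (%d), want 2\n", hosts, kubelets)
		}
	}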

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-328000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-328000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.187581166s)

-- stdout --
	* [multinode-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-328000" primary control-plane node in "multinode-328000" cluster
	* Restarting existing qemu2 VM for "multinode-328000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-328000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:55:10.771226    7988 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:55:10.771373    7988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:10.771376    7988 out.go:358] Setting ErrFile to fd 2...
	I1007 04:55:10.771378    7988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:10.771498    7988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:55:10.772565    7988 out.go:352] Setting JSON to false
	I1007 04:55:10.790028    7988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5081,"bootTime":1728297029,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:55:10.790120    7988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:55:10.794686    7988 out.go:177] * [multinode-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:55:10.801641    7988 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:55:10.801686    7988 notify.go:220] Checking for updates...
	I1007 04:55:10.808561    7988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:55:10.811573    7988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:55:10.814625    7988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:55:10.817545    7988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:55:10.820555    7988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:55:10.823951    7988 config.go:182] Loaded profile config "multinode-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:55:10.824234    7988 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:55:10.828600    7988 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:55:10.835574    7988 start.go:297] selected driver: qemu2
	I1007 04:55:10.835579    7988 start.go:901] validating driver "qemu2" against &{Name:multinode-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:55:10.835630    7988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:55:10.838103    7988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:55:10.838133    7988 cni.go:84] Creating CNI manager for ""
	I1007 04:55:10.838155    7988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 04:55:10.838200    7988 start.go:340] cluster config:
	{Name:multinode-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:55:10.842654    7988 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:10.850612    7988 out.go:177] * Starting "multinode-328000" primary control-plane node in "multinode-328000" cluster
	I1007 04:55:10.854553    7988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:55:10.854570    7988 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:55:10.854578    7988 cache.go:56] Caching tarball of preloaded images
	I1007 04:55:10.854677    7988 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:55:10.854685    7988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 04:55:10.854770    7988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/multinode-328000/config.json ...
	I1007 04:55:10.855240    7988 start.go:360] acquireMachinesLock for multinode-328000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:55:10.855277    7988 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "multinode-328000"
	I1007 04:55:10.855291    7988 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:55:10.855296    7988 fix.go:54] fixHost starting: 
	I1007 04:55:10.855424    7988 fix.go:112] recreateIfNeeded on multinode-328000: state=Stopped err=<nil>
	W1007 04:55:10.855436    7988 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:55:10.862574    7988 out.go:177] * Restarting existing qemu2 VM for "multinode-328000" ...
	I1007 04:55:10.866563    7988 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:55:10.866642    7988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dc:e7:46:31:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:55:10.869026    7988 main.go:141] libmachine: STDOUT: 
	I1007 04:55:10.869046    7988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:55:10.869078    7988 fix.go:56] duration metric: took 13.780583ms for fixHost
	I1007 04:55:10.869084    7988 start.go:83] releasing machines lock for "multinode-328000", held for 13.802333ms
	W1007 04:55:10.869090    7988 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:55:10.869144    7988 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:55:10.869149    7988 start.go:729] Will try again in 5 seconds ...
	I1007 04:55:15.871293    7988 start.go:360] acquireMachinesLock for multinode-328000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:55:15.871632    7988 start.go:364] duration metric: took 278.917µs to acquireMachinesLock for "multinode-328000"
	I1007 04:55:15.871748    7988 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:55:15.871767    7988 fix.go:54] fixHost starting: 
	I1007 04:55:15.872431    7988 fix.go:112] recreateIfNeeded on multinode-328000: state=Stopped err=<nil>
	W1007 04:55:15.872458    7988 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:55:15.876940    7988 out.go:177] * Restarting existing qemu2 VM for "multinode-328000" ...
	I1007 04:55:15.880776    7988 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:55:15.880954    7988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dc:e7:46:31:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/multinode-328000/disk.qcow2
	I1007 04:55:15.890916    7988 main.go:141] libmachine: STDOUT: 
	I1007 04:55:15.890971    7988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:55:15.891035    7988 fix.go:56] duration metric: took 19.26575ms for fixHost
	I1007 04:55:15.891072    7988 start.go:83] releasing machines lock for "multinode-328000", held for 19.401417ms
	W1007 04:55:15.891224    7988 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-328000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-328000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:55:15.898716    7988 out.go:201] 
	W1007 04:55:15.902899    7988 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:55:15.902941    7988 out.go:270] * 
	* 
	W1007 04:55:15.905663    7988 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:55:15.912843    7988 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-328000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (75.164958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

TestMultiNode/serial/ValidateNameConflict (20.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-328000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-328000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-328000-m01 --driver=qemu2 : exit status 80 (9.907237083s)

-- stdout --
	* [multinode-328000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-328000-m01" primary control-plane node in "multinode-328000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-328000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-328000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-328000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-328000-m02 --driver=qemu2 : exit status 80 (9.935462042s)

-- stdout --
	* [multinode-328000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-328000-m02" primary control-plane node in "multinode-328000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-328000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-328000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-328000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-328000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-328000: exit status 83 (85.5975ms)

-- stdout --
	* The control-plane node multinode-328000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-328000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-328000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-328000 -n multinode-328000: exit status 7 (35.620709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.08s)
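
The profile names under test collide with minikube's machine-naming convention: secondary nodes of a profile are named <profile>-m02, <profile>-m03, and so on (see "node delete m03" earlier in this suite), so a separate profile called multinode-328000-m02 can clash with a future node of multinode-328000. A sketch of that convention follows; the zero-padded format string is inferred from the names in this log, not taken from minikube's source.

	// nodenames.go: derive machine names for a profile's nodes, as suggested by
	// the multinode-328000-m02 / m03 names appearing in this report.
	package main

	import "fmt"

	func machineName(profile string, node int) string {
		if node == 1 {
			return profile // the first (control-plane) machine uses the bare profile name
		}
		return fmt.Sprintf("%s-m%02d", profile, node)
	}

	func main() {
		for n := 1; n <= 3; n++ {
			fmt.Println(machineName("multinode-328000", n))
		}
		// Output: multinode-328000, multinode-328000-m02, multinode-328000-m03
	}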

TestPreload (9.97s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-943000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-943000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.809015625s)

-- stdout --
	* [test-preload-943000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-943000" primary control-plane node in "test-preload-943000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-943000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:55:36.235189    8043 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:55:36.235348    8043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:36.235355    8043 out.go:358] Setting ErrFile to fd 2...
	I1007 04:55:36.235357    8043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:55:36.235490    8043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:55:36.236714    8043 out.go:352] Setting JSON to false
	I1007 04:55:36.254302    8043 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5107,"bootTime":1728297029,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:55:36.254379    8043 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:55:36.259945    8043 out.go:177] * [test-preload-943000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:55:36.267948    8043 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:55:36.267976    8043 notify.go:220] Checking for updates...
	I1007 04:55:36.273902    8043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:55:36.276942    8043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:55:36.279846    8043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:55:36.282929    8043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:55:36.285912    8043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:55:36.289285    8043 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:55:36.289343    8043 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:55:36.293901    8043 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:55:36.300810    8043 start.go:297] selected driver: qemu2
	I1007 04:55:36.300815    8043 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:55:36.300820    8043 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:55:36.303249    8043 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:55:36.305881    8043 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:55:36.309057    8043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 04:55:36.309087    8043 cni.go:84] Creating CNI manager for ""
	I1007 04:55:36.309111    8043 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:55:36.309122    8043 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 04:55:36.309155    8043 start.go:340] cluster config:
	{Name:test-preload-943000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:55:36.313900    8043 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.321900    8043 out.go:177] * Starting "test-preload-943000" primary control-plane node in "test-preload-943000" cluster
	I1007 04:55:36.325762    8043 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1007 04:55:36.325863    8043 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/test-preload-943000/config.json ...
	I1007 04:55:36.325886    8043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/test-preload-943000/config.json: {Name:mkec1864c18f68e127d5f74534ce0c4711a5746b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:55:36.325899    8043 cache.go:107] acquiring lock: {Name:mkf4d7d0e210cfec46646868b33d8ac3b8550a66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.325910    8043 cache.go:107] acquiring lock: {Name:mkf346c595febcec22d6055eb05bf2ac73e8f58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.325958    8043 cache.go:107] acquiring lock: {Name:mk79b55c922f034d8d2daef65bbe56982d148edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.325935    8043 cache.go:107] acquiring lock: {Name:mk1fb7e737e7d31ae86da0892dd0283610520d54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.326058    8043 cache.go:107] acquiring lock: {Name:mkdb9cac01f9f7b6a6f80885a2d8020970b99b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.326061    8043 cache.go:107] acquiring lock: {Name:mk099724b1b35494df7509e531fe48a96d51871d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.326141    8043 cache.go:107] acquiring lock: {Name:mk9f387f4187afd07d5c73cc2d9202c17de6d09c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.326937    8043 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 04:55:36.326925    8043 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 04:55:36.326960    8043 start.go:360] acquireMachinesLock for test-preload-943000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:55:36.326962    8043 cache.go:107] acquiring lock: {Name:mk6905613853f0f5ccbc458496fd13ac543d3998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:55:36.326978    8043 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 04:55:36.327015    8043 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 04:55:36.327034    8043 start.go:364] duration metric: took 66µs to acquireMachinesLock for "test-preload-943000"
	I1007 04:55:36.327048    8043 start.go:93] Provisioning new machine with config: &{Name:test-preload-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:55:36.326928    8043 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 04:55:36.327075    8043 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:55:36.327089    8043 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 04:55:36.327204    8043 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 04:55:36.327267    8043 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 04:55:36.330895    8043 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 04:55:36.340068    8043 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 04:55:36.340694    8043 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 04:55:36.340745    8043 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 04:55:36.340908    8043 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 04:55:36.343307    8043 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 04:55:36.343308    8043 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 04:55:36.343407    8043 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 04:55:36.343415    8043 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 04:55:36.349223    8043 start.go:159] libmachine.API.Create for "test-preload-943000" (driver="qemu2")
	I1007 04:55:36.349243    8043 client.go:168] LocalClient.Create starting
	I1007 04:55:36.349322    8043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:55:36.349360    8043 main.go:141] libmachine: Decoding PEM data...
	I1007 04:55:36.349371    8043 main.go:141] libmachine: Parsing certificate...
	I1007 04:55:36.349419    8043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:55:36.349452    8043 main.go:141] libmachine: Decoding PEM data...
	I1007 04:55:36.349461    8043 main.go:141] libmachine: Parsing certificate...
	I1007 04:55:36.349847    8043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:55:36.496614    8043 main.go:141] libmachine: Creating SSH key...
	I1007 04:55:36.611937    8043 main.go:141] libmachine: Creating Disk image...
	I1007 04:55:36.611956    8043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:55:36.612179    8043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2
	I1007 04:55:36.623154    8043 main.go:141] libmachine: STDOUT: 
	I1007 04:55:36.623179    8043 main.go:141] libmachine: STDERR: 
	I1007 04:55:36.623246    8043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2 +20000M
	I1007 04:55:36.632488    8043 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:55:36.632511    8043 main.go:141] libmachine: STDERR: 
	I1007 04:55:36.632544    8043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2
	I1007 04:55:36.632548    8043 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:55:36.632609    8043 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:55:36.632651    8043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ba:3c:90:78:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2
	I1007 04:55:36.634577    8043 main.go:141] libmachine: STDOUT: 
	I1007 04:55:36.634592    8043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:55:36.634610    8043 client.go:171] duration metric: took 285.363292ms to LocalClient.Create
	I1007 04:55:36.842375    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1007 04:55:36.848681    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1007 04:55:36.852277    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1007 04:55:36.904952    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W1007 04:55:36.972668    8043 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 04:55:36.972696    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 04:55:37.011166    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 04:55:37.083610    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1007 04:55:37.142034    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1007 04:55:37.142053    8043 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 815.09425ms
	I1007 04:55:37.142070    8043 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1007 04:55:37.689839    8043 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 04:55:37.689934    8043 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 04:55:38.135194    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 04:55:38.135265    8043 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.809370709s
	I1007 04:55:38.135293    8043 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 04:55:38.592261    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1007 04:55:38.592312    8043 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.266355625s
	I1007 04:55:38.592347    8043 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1007 04:55:38.634851    8043 start.go:128] duration metric: took 2.307767875s to createHost
	I1007 04:55:38.634888    8043 start.go:83] releasing machines lock for "test-preload-943000", held for 2.307851375s
	W1007 04:55:38.634925    8043 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:55:38.650276    8043 out.go:177] * Deleting "test-preload-943000" in qemu2 ...
	W1007 04:55:38.673670    8043 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:55:38.673699    8043 start.go:729] Will try again in 5 seconds ...
	I1007 04:55:38.783036    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1007 04:55:38.783094    8043 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.457006083s
	I1007 04:55:38.783143    8043 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1007 04:55:40.632777    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1007 04:55:40.632820    8043 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.30679375s
	I1007 04:55:40.632847    8043 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1007 04:55:41.145299    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1007 04:55:41.145345    8043 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.81945475s
	I1007 04:55:41.145371    8043 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1007 04:55:43.183453    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1007 04:55:43.183531    8043 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.85762875s
	I1007 04:55:43.183562    8043 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1007 04:55:43.675880    8043 start.go:360] acquireMachinesLock for test-preload-943000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:55:43.676327    8043 start.go:364] duration metric: took 385.667µs to acquireMachinesLock for "test-preload-943000"
	I1007 04:55:43.676437    8043 start.go:93] Provisioning new machine with config: &{Name:test-preload-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:55:43.676656    8043 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:55:43.688041    8043 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 04:55:43.738311    8043 start.go:159] libmachine.API.Create for "test-preload-943000" (driver="qemu2")
	I1007 04:55:43.738361    8043 client.go:168] LocalClient.Create starting
	I1007 04:55:43.738502    8043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:55:43.738578    8043 main.go:141] libmachine: Decoding PEM data...
	I1007 04:55:43.738602    8043 main.go:141] libmachine: Parsing certificate...
	I1007 04:55:43.738684    8043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:55:43.738743    8043 main.go:141] libmachine: Decoding PEM data...
	I1007 04:55:43.738761    8043 main.go:141] libmachine: Parsing certificate...
	I1007 04:55:43.739310    8043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:55:43.892811    8043 main.go:141] libmachine: Creating SSH key...
	I1007 04:55:43.942425    8043 main.go:141] libmachine: Creating Disk image...
	I1007 04:55:43.942431    8043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:55:43.942621    8043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2
	I1007 04:55:43.952733    8043 main.go:141] libmachine: STDOUT: 
	I1007 04:55:43.952762    8043 main.go:141] libmachine: STDERR: 
	I1007 04:55:43.952821    8043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2 +20000M
	I1007 04:55:43.961576    8043 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:55:43.961589    8043 main.go:141] libmachine: STDERR: 
	I1007 04:55:43.961603    8043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2
	I1007 04:55:43.961606    8043 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:55:43.961615    8043 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:55:43.961661    8043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:bc:59:9c:fd:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/test-preload-943000/disk.qcow2
	I1007 04:55:43.963518    8043 main.go:141] libmachine: STDOUT: 
	I1007 04:55:43.963531    8043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:55:43.963544    8043 client.go:171] duration metric: took 225.179209ms to LocalClient.Create
	I1007 04:55:44.430628    8043 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1007 04:55:44.430710    8043 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.10470575s
	I1007 04:55:44.430740    8043 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1007 04:55:44.430810    8043 cache.go:87] Successfully saved all images to host disk.
	I1007 04:55:45.965829    8043 start.go:128] duration metric: took 2.28914375s to createHost
	I1007 04:55:45.965885    8043 start.go:83] releasing machines lock for "test-preload-943000", held for 2.289540875s
	W1007 04:55:45.966259    8043 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-943000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-943000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:55:45.978968    8043 out.go:201] 
	W1007 04:55:45.984044    8043 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 04:55:45.984071    8043 out.go:270] * 
	* 
	W1007 04:55:45.986751    8043 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:55:45.996928    8043 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-943000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-07 04:55:46.014126 -0700 PDT m=+736.228571835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-943000 -n test-preload-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-943000 -n test-preload-943000: exit status 7 (72.268458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-943000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-943000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-943000
--- FAIL: TestPreload (9.97s)
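Note: TestPreload shows this report's common failure mode in isolation. Image caching succeeds (`cache.go:87] Successfully saved all images to host disk.`), but both VM creates abort because `socket_vmnet_client` gets "connection refused" on the unix socket configured as SocketVMnetPath (/var/run/socket_vmnet), so QEMU is never launched. The `-netdev socket,id=net0,fd=3` argument in the logged command line indicates the client hands its connected socket to QEMU as fd 3, which is why the refused connection is fatal to the whole start. A quick pre-flight check for this condition, sketched in Go (the socket path comes from the config dump above; the check itself is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "Connection refused" here reproduces the error in this report:
		// nothing is listening on the socket_vmnet daemon's socket, which
		// usually means the daemon was never started on this host.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}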

TestScheduledStopUnix (10.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-569000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-569000 --memory=2048 --driver=qemu2 : exit status 80 (9.981373875s)

-- stdout --
	* [scheduled-stop-569000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-569000" primary control-plane node in "scheduled-stop-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-569000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-569000" primary control-plane node in "scheduled-stop-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-07 04:55:56.149541 -0700 PDT m=+746.364016710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-569000 -n scheduled-stop-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-569000 -n scheduled-stop-569000: exit status 7 (73.134625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-569000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-569000
--- FAIL: TestScheduledStopUnix (10.14s)
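Note: as with TestPreload, the feature under test is never reached; the 10.14s wall time covers only the two aborted VM creates and cleanup, so no scheduled stop is ever armed. For context, scheduled stop is driven by minikube's `--schedule` flag on `minikube stop`; a hedged sketch of the invocation the test would presumably build up to (profile name from the log, the 5m duration is an arbitrary example, and this is not the test's own code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Arm a delayed stop; minikube exits immediately and a background
		// process performs the stop once the schedule elapses.
		out, err := exec.Command(
			"out/minikube-darwin-arm64", "stop",
			"-p", "scheduled-stop-569000", "--schedule", "5m",
		).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("scheduled stop failed:", err)
		}
	}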

TestSkaffold (16.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3735936564 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3735936564 version: (1.062616708s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-906000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-906000 --memory=2600 --driver=qemu2 : exit status 80 (9.921639s)

-- stdout --
	* [skaffold-906000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-906000" primary control-plane node in "skaffold-906000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-906000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-906000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-906000" primary control-plane node in "skaffold-906000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-906000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-07 04:56:12.703498 -0700 PDT m=+762.918022626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-906000 -n skaffold-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-906000 -n skaffold-906000: exit status 7 (68.724875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-906000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-906000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-906000
--- FAIL: TestSkaffold (16.55s)

TestRunningBinaryUpgrade (614.44s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3721317920 start -p running-upgrade-802000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3721317920 start -p running-upgrade-802000 --memory=2200 --vm-driver=qemu2 : (1m13.153901542s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-802000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-802000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.064398625s)

-- stdout --
	* [running-upgrade-802000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-802000" primary control-plane node in "running-upgrade-802000" cluster
	* Updating the running qemu2 "running-upgrade-802000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1007 04:58:11.430118    8424 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:58:11.430301    8424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:58:11.430304    8424 out.go:358] Setting ErrFile to fd 2...
	I1007 04:58:11.430306    8424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:58:11.430432    8424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:58:11.431528    8424 out.go:352] Setting JSON to false
	I1007 04:58:11.449648    8424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5262,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:58:11.449734    8424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:58:11.454073    8424 out.go:177] * [running-upgrade-802000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:58:11.460916    8424 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:58:11.460955    8424 notify.go:220] Checking for updates...
	I1007 04:58:11.468052    8424 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:58:11.470982    8424 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:58:11.473987    8424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:58:11.476994    8424 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:58:11.478188    8424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:58:11.481197    8424 config.go:182] Loaded profile config "running-upgrade-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 04:58:11.483946    8424 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 04:58:11.487027    8424 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:58:11.491002    8424 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:58:11.498022    8424 start.go:297] selected driver: qemu2
	I1007 04:58:11.498027    8424 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 04:58:11.498074    8424 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:58:11.500429    8424 cni.go:84] Creating CNI manager for ""
	I1007 04:58:11.500462    8424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:58:11.500498    8424 start.go:340] cluster config:
	{Name:running-upgrade-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 04:58:11.500549    8424 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:58:11.509101    8424 out.go:177] * Starting "running-upgrade-802000" primary control-plane node in "running-upgrade-802000" cluster
	I1007 04:58:11.512936    8424 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 04:58:11.512952    8424 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1007 04:58:11.512960    8424 cache.go:56] Caching tarball of preloaded images
	I1007 04:58:11.513045    8424 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:58:11.513051    8424 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
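
Note: the preload step above only verifies that the cached image tarball already exists on the host; nothing is downloaded when it is found. A minimal sketch of the same check (the cache path is shortened to $HOME/.minikube here; the log uses the jenkins integration tree, and the download fallback is omitted):

    # Skip the download when the preload tarball is already cached locally.
    PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"
    if [ -f "$PRELOAD" ]; then
        echo "found local preload, skipping download"
    fi
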
	I1007 04:58:11.513100    8424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/config.json ...
	I1007 04:58:11.513450    8424 start.go:360] acquireMachinesLock for running-upgrade-802000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:58:11.513479    8424 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "running-upgrade-802000"
	I1007 04:58:11.513488    8424 start.go:96] Skipping create...Using existing machine configuration
	I1007 04:58:11.513492    8424 fix.go:54] fixHost starting: 
	I1007 04:58:11.514152    8424 fix.go:112] recreateIfNeeded on running-upgrade-802000: state=Running err=<nil>
	W1007 04:58:11.514160    8424 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 04:58:11.517999    8424 out.go:177] * Updating the running qemu2 "running-upgrade-802000" VM ...
	I1007 04:58:11.525959    8424 machine.go:93] provisionDockerMachine start ...
	I1007 04:58:11.526003    8424 main.go:141] libmachine: Using SSH client type: native
	I1007 04:58:11.526137    8424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009061f0] 0x100908a30 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I1007 04:58:11.526142    8424 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 04:58:11.584221    8424 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-802000
	
	I1007 04:58:11.584237    8424 buildroot.go:166] provisioning hostname "running-upgrade-802000"
	I1007 04:58:11.584316    8424 main.go:141] libmachine: Using SSH client type: native
	I1007 04:58:11.584439    8424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009061f0] 0x100908a30 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I1007 04:58:11.584445    8424 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-802000 && echo "running-upgrade-802000" | sudo tee /etc/hostname
	I1007 04:58:11.645816    8424 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-802000
	
	I1007 04:58:11.645873    8424 main.go:141] libmachine: Using SSH client type: native
	I1007 04:58:11.645975    8424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009061f0] 0x100908a30 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I1007 04:58:11.645984    8424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-802000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-802000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-802000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 04:58:11.707522    8424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
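
Note: the SSH script above makes the VM's own hostname resolvable: if /etc/hosts has no entry ending in running-upgrade-802000, it rewrites an existing 127.0.1.1 alias line or appends a new one. A slightly simplified standalone version (the hostname is parameterized here for illustration):

    HN=running-upgrade-802000            # hostname set by the previous step
    if ! grep -q "\s${HN}\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
            sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HN}/" /etc/hosts
        else
            echo "127.0.1.1 ${HN}" | sudo tee -a /etc/hosts
        fi
    fi
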
	I1007 04:58:11.707533    8424 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19763-6232/.minikube CaCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19763-6232/.minikube}
	I1007 04:58:11.707540    8424 buildroot.go:174] setting up certificates
	I1007 04:58:11.707552    8424 provision.go:84] configureAuth start
	I1007 04:58:11.707556    8424 provision.go:143] copyHostCerts
	I1007 04:58:11.707619    8424 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem, removing ...
	I1007 04:58:11.707623    8424 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem
	I1007 04:58:11.707741    8424 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem (1082 bytes)
	I1007 04:58:11.707935    8424 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem, removing ...
	I1007 04:58:11.707939    8424 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem
	I1007 04:58:11.707980    8424 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem (1123 bytes)
	I1007 04:58:11.708108    8424 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem, removing ...
	I1007 04:58:11.708111    8424 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem
	I1007 04:58:11.708148    8424 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem (1679 bytes)
	I1007 04:58:11.708245    8424 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-802000 san=[127.0.0.1 localhost minikube running-upgrade-802000]
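
Note: provision.go generates this server certificate in Go; an equivalent openssl invocation makes the inputs explicit, a cert signed by the machine CA (ca.pem/ca-key.pem) whose SANs cover every name the Docker TLS port may be reached by. This is a sketch only, not minikube's actual code path:

    # Issue a server cert signed by the machine CA with the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.running-upgrade-802000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-802000')
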
	I1007 04:58:12.072955    8424 provision.go:177] copyRemoteCerts
	I1007 04:58:12.073024    8424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 04:58:12.073035    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	I1007 04:58:12.104515    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 04:58:12.111696    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 04:58:12.119263    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 04:58:12.127065    8424 provision.go:87] duration metric: took 419.502375ms to configureAuth
	I1007 04:58:12.127074    8424 buildroot.go:189] setting minikube options for container-runtime
	I1007 04:58:12.127180    8424 config.go:182] Loaded profile config "running-upgrade-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 04:58:12.127223    8424 main.go:141] libmachine: Using SSH client type: native
	I1007 04:58:12.127308    8424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009061f0] 0x100908a30 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I1007 04:58:12.127313    8424 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1007 04:58:12.184441    8424 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1007 04:58:12.184452    8424 buildroot.go:70] root file system type: tmpfs
	I1007 04:58:12.184504    8424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1007 04:58:12.184567    8424 main.go:141] libmachine: Using SSH client type: native
	I1007 04:58:12.184674    8424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009061f0] 0x100908a30 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I1007 04:58:12.184716    8424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1007 04:58:12.245809    8424 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1007 04:58:12.245875    8424 main.go:141] libmachine: Using SSH client type: native
	I1007 04:58:12.245978    8424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009061f0] 0x100908a30 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I1007 04:58:12.245990    8424 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1007 04:58:12.304342    8424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
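
Note: the rendered unit was written to docker.service.new, and the command above only swaps it in (and restarts Docker) when it differs from the live unit, so repeated provisioning is idempotent. The pattern, spelled out:

    # Replace the unit and restart Docker only when the rendered file changed.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    fi
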
	I1007 04:58:12.304351    8424 machine.go:96] duration metric: took 778.38725ms to provisionDockerMachine
	I1007 04:58:12.304356    8424 start.go:293] postStartSetup for "running-upgrade-802000" (driver="qemu2")
	I1007 04:58:12.304362    8424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 04:58:12.304418    8424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 04:58:12.304427    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	I1007 04:58:12.335875    8424 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 04:58:12.337449    8424 info.go:137] Remote host: Buildroot 2021.02.12
	I1007 04:58:12.337456    8424 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19763-6232/.minikube/addons for local assets ...
	I1007 04:58:12.337514    8424 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19763-6232/.minikube/files for local assets ...
	I1007 04:58:12.337610    8424 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem -> 67502.pem in /etc/ssl/certs
	I1007 04:58:12.337711    8424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 04:58:12.340444    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem --> /etc/ssl/certs/67502.pem (1708 bytes)
	I1007 04:58:12.347622    8424 start.go:296] duration metric: took 43.262167ms for postStartSetup
	I1007 04:58:12.347637    8424 fix.go:56] duration metric: took 834.148375ms for fixHost
	I1007 04:58:12.347690    8424 main.go:141] libmachine: Using SSH client type: native
	I1007 04:58:12.347794    8424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009061f0] 0x100908a30 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I1007 04:58:12.347799    8424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 04:58:12.409499    8424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302292.804041263
	
	I1007 04:58:12.409508    8424 fix.go:216] guest clock: 1728302292.804041263
	I1007 04:58:12.409512    8424 fix.go:229] Guest: 2024-10-07 04:58:12.804041263 -0700 PDT Remote: 2024-10-07 04:58:12.347638 -0700 PDT m=+0.940267668 (delta=456.403263ms)
	I1007 04:58:12.409523    8424 fix.go:200] guest clock delta is within tolerance: 456.403263ms
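
Note: fixHost samples the guest clock over SSH (date +%s.%N) and compares it against the host clock, only forcing a resync when the delta exceeds the tolerance; the 456ms measured here passes. A sketch of the measurement (assumes GNU date on both ends; the tolerance value itself is not shown in this log):

    guest=$(ssh -p 51231 docker@localhost 'date +%s.%N')   # guest clock
    host=$(date +%s.%N)                                    # host clock
    echo "guest-host delta: $(echo "$guest - $host" | bc)s"
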
	I1007 04:58:12.409525    8424 start.go:83] releasing machines lock for "running-upgrade-802000", held for 896.044ms
	I1007 04:58:12.409598    8424 ssh_runner.go:195] Run: cat /version.json
	I1007 04:58:12.409607    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	I1007 04:58:12.409598    8424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 04:58:12.409634    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	W1007 04:58:12.410136    8424 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51363->127.0.0.1:51231: write: broken pipe
	I1007 04:58:12.410152    8424 retry.go:31] will retry after 315.345872ms: ssh: handshake failed: write tcp 127.0.0.1:51363->127.0.0.1:51231: write: broken pipe
	W1007 04:58:12.779408    8424 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1007 04:58:12.779547    8424 ssh_runner.go:195] Run: systemctl --version
	I1007 04:58:12.784774    8424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 04:58:12.788184    8424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 04:58:12.788267    8424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1007 04:58:12.794448    8424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1007 04:58:12.802684    8424 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
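
Note: the two find/sed passes above normalize every bridge and podman CNI config on the guest to the cluster pod CIDR (and drop IPv6 dst/subnet entries); here only 87-podman-bridge.conflist matched. The core rewrite for a single file, readably:

    # Point an existing bridge CNI config at the cluster pod CIDR.
    sudo sed -i -r 's|"subnet": "[^"]*"|"subnet": "10.244.0.0/16"|g' \
        /etc/cni/net.d/87-podman-bridge.conflist
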
	I1007 04:58:12.802698    8424 start.go:495] detecting cgroup driver to use...
	I1007 04:58:12.802900    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 04:58:12.811494    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1007 04:58:12.815823    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 04:58:12.819615    8424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 04:58:12.819655    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 04:58:12.823377    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 04:58:12.826974    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 04:58:12.830587    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 04:58:12.833829    8424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 04:58:12.836678    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 04:58:12.839748    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 04:58:12.843278    8424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 04:58:12.846434    8424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 04:58:12.848924    8424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 04:58:12.851968    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 04:58:12.945998    8424 ssh_runner.go:195] Run: sudo systemctl restart containerd
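
Note: the sed series above edits /etc/containerd/config.toml in place so containerd uses the cgroupfs cgroup driver and the runc v2 runtime, then systemd is reloaded and containerd restarted to pick up the changes. Condensed to the two decisive edits plus the restart:

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
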
	I1007 04:58:12.952451    8424 start.go:495] detecting cgroup driver to use...
	I1007 04:58:12.952536    8424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1007 04:58:12.960906    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 04:58:12.966486    8424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 04:58:12.972708    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 04:58:12.977710    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 04:58:12.985740    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 04:58:12.991197    8424 ssh_runner.go:195] Run: which cri-dockerd
	I1007 04:58:12.992678    8424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1007 04:58:12.995674    8424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1007 04:58:13.000809    8424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1007 04:58:13.095726    8424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1007 04:58:13.193443    8424 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1007 04:58:13.193511    8424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1007 04:58:13.199589    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 04:58:13.289027    8424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 04:58:16.559148    8424 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.270107583s)
	I1007 04:58:16.559231    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1007 04:58:16.564215    8424 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1007 04:58:16.570589    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 04:58:16.575134    8424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1007 04:58:16.658854    8424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1007 04:58:16.744269    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 04:58:16.827021    8424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1007 04:58:16.833529    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 04:58:16.837898    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 04:58:16.914865    8424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1007 04:58:16.954776    8424 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1007 04:58:16.954883    8424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1007 04:58:16.956813    8424 start.go:563] Will wait 60s for crictl version
	I1007 04:58:16.956873    8424 ssh_runner.go:195] Run: which crictl
	I1007 04:58:16.958159    8424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 04:58:16.969800    8424 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1007 04:58:16.969886    8424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 04:58:16.983343    8424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 04:58:17.002807    8424 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1007 04:58:17.002956    8424 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1007 04:58:17.004279    8424 kubeadm.go:883] updating cluster {Name:running-upgrade-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1007 04:58:17.004328    8424 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 04:58:17.004375    8424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 04:58:17.015018    8424 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 04:58:17.015030    8424 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 04:58:17.015093    8424 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 04:58:17.018504    8424 ssh_runner.go:195] Run: which lz4
	I1007 04:58:17.019657    8424 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 04:58:17.020820    8424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 04:58:17.020829    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1007 04:58:17.962409    8424 docker.go:649] duration metric: took 942.79775ms to copy over tarball
	I1007 04:58:17.962483    8424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 04:58:19.129361    8424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.166866625s)
	I1007 04:58:19.129375    8424 ssh_runner.go:146] rm: /preloaded.tar.lz4
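
Note: since the expected images were not present, the ~360 MB preload tarball is streamed into the guest and unpacked directly over /var (which contains /var/lib/docker), then removed. The whole transfer-and-extract flow as a sketch ("guest" stands for the SSH target used throughout this log):

    scp preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 guest:/preloaded.tar.lz4
    ssh guest 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 \
        && sudo rm /preloaded.tar.lz4'
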
	I1007 04:58:19.144897    8424 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 04:58:19.148129    8424 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1007 04:58:19.153421    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 04:58:19.235787    8424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 04:58:20.436652    8424 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.200851667s)
	I1007 04:58:20.436745    8424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 04:58:20.453582    8424 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 04:58:20.453596    8424 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 04:58:20.453615    8424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
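
Note: the extracted tarball carries images tagged k8s.gcr.io/*, while this minikube expects registry.k8s.io/* names, so the preload is judged missing and each image is loaded from the per-image cache instead. A retag would show the mismatch is in names only (illustrative; the log shows minikube removing and reloading the images rather than retagging):

    # Same image bytes, old registry name vs. the expected one.
    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1
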
	I1007 04:58:20.459378    8424 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 04:58:20.461365    8424 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 04:58:20.463083    8424 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 04:58:20.463438    8424 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 04:58:20.465079    8424 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 04:58:20.465113    8424 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 04:58:20.466335    8424 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 04:58:20.466587    8424 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 04:58:20.467860    8424 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 04:58:20.468115    8424 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 04:58:20.469226    8424 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 04:58:20.469271    8424 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 04:58:20.470313    8424 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 04:58:20.470666    8424 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 04:58:20.471993    8424 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 04:58:20.472586    8424 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 04:58:20.992053    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1007 04:58:20.992208    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 04:58:21.009888    8424 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1007 04:58:21.009905    8424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1007 04:58:21.009921    8424 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1007 04:58:21.009924    8424 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 04:58:21.009985    8424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1007 04:58:21.009985    8424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1007 04:58:21.020886    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 04:58:21.021006    8424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 04:58:21.021642    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1007 04:58:21.022759    8424 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1007 04:58:21.022781    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1007 04:58:21.031792    8424 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 04:58:21.031806    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1007 04:58:21.041499    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 04:58:21.065858    8424 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1007 04:58:21.065902    8424 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1007 04:58:21.065925    8424 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 04:58:21.066006    8424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1007 04:58:21.077488    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1007 04:58:21.077620    8424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1007 04:58:21.079385    8424 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1007 04:58:21.079400    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1007 04:58:21.089465    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1007 04:58:21.107123    8424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1007 04:58:21.107145    8424 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 04:58:21.107211    8424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1007 04:58:21.114406    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1007 04:58:21.138844    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1007 04:58:21.148828    8424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1007 04:58:21.148853    8424 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 04:58:21.148912    8424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1007 04:58:21.194622    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1007 04:58:21.209445    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 04:58:21.238390    8424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1007 04:58:21.238419    8424 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 04:58:21.238482    8424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W1007 04:58:21.257763    8424 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 04:58:21.257927    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 04:58:21.271706    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1007 04:58:21.294379    8424 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1007 04:58:21.294402    8424 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 04:58:21.294472    8424 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 04:58:21.320119    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 04:58:21.320269    8424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	W1007 04:58:21.325654    8424 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 04:58:21.325767    8424 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 04:58:21.333697    8424 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1007 04:58:21.333723    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1007 04:58:21.355085    8424 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1007 04:58:21.355110    8424 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 04:58:21.355173    8424 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 04:58:21.376637    8424 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 04:58:21.376664    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1007 04:58:21.401038    8424 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 04:58:21.401190    8424 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 04:58:21.543959    8424 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 04:58:21.543991    8424 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 04:58:21.543996    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1007 04:58:21.543973    8424 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1007 04:58:21.544022    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1007 04:58:21.600977    8424 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1007 04:58:21.601002    8424 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1007 04:58:21.601008    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1007 04:58:21.832091    8424 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1007 04:58:21.832136    8424 cache_images.go:92] duration metric: took 1.378513584s to LoadCachedImages
	W1007 04:58:21.832184    8424 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1007 04:58:21.832192    8424 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1007 04:58:21.832245    8424 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-802000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
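
Note: the kubelet unit content above appears to be what gets copied below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= clears the base unit's command (the same drop-in pattern as docker.service earlier) before setting the kubelet invocation that points at cri-dockerd and the node IP. To see the merged result on the guest:

    # Show the effective kubelet unit with all drop-ins applied.
    sudo systemctl cat kubelet
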
	I1007 04:58:21.832318    8424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1007 04:58:21.846114    8424 cni.go:84] Creating CNI manager for ""
	I1007 04:58:21.846128    8424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:58:21.846150    8424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 04:58:21.846162    8424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-802000 NodeName:running-upgrade-802000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 04:58:21.846224    8424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-802000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 04:58:21.846310    8424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1007 04:58:21.849240    8424 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 04:58:21.849284    8424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 04:58:21.851887    8424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1007 04:58:21.856957    8424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 04:58:21.861915    8424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
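
Note: the kubeadm config assembled above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is staged as kubeadm.yaml.new before being moved into place. One way to sanity-check such a file without touching the cluster, as a sketch (not a step this log runs):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
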
	I1007 04:58:21.867195    8424 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1007 04:58:21.868528    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 04:58:21.946628    8424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 04:58:21.951703    8424 certs.go:68] Setting up /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000 for IP: 10.0.2.15
	I1007 04:58:21.951709    8424 certs.go:194] generating shared ca certs ...
	I1007 04:58:21.951717    8424 certs.go:226] acquiring lock for ca certs: {Name:mk64252dad53b4f3a87f635894b143f083e9f2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:58:21.951965    8424 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.key
	I1007 04:58:21.952001    8424 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.key
	I1007 04:58:21.952017    8424 certs.go:256] generating profile certs ...
	I1007 04:58:21.952074    8424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.key
	I1007 04:58:21.952086    8424 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.key.97a1ea83
	I1007 04:58:21.952096    8424 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.crt.97a1ea83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1007 04:58:22.127934    8424 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.crt.97a1ea83 ...
	I1007 04:58:22.127958    8424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.crt.97a1ea83: {Name:mk2ddc07694c426e0fbec9e93d5d473877c2629a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:58:22.128330    8424 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.key.97a1ea83 ...
	I1007 04:58:22.128339    8424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.key.97a1ea83: {Name:mk95e0dae4121f4657aae42d5816bade274765c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:58:22.128513    8424 certs.go:381] copying /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.crt.97a1ea83 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.crt
	I1007 04:58:22.128641    8424 certs.go:385] copying /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.key.97a1ea83 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.key
	I1007 04:58:22.128783    8424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/proxy-client.key
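
Note: the apiserver serving certificate was generated with SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15], covering the in-cluster service VIP, loopback, and the node IP. To inspect what actually went into a cert:

    openssl x509 -noout -text \
        -in /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
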
	I1007 04:58:22.128933    8424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750.pem (1338 bytes)
	W1007 04:58:22.128959    8424 certs.go:480] ignoring /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750_empty.pem, impossibly tiny 0 bytes
	I1007 04:58:22.128965    8424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 04:58:22.128989    8424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem (1082 bytes)
	I1007 04:58:22.129009    8424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem (1123 bytes)
	I1007 04:58:22.129026    8424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem (1679 bytes)
	I1007 04:58:22.129064    8424 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem (1708 bytes)
	I1007 04:58:22.129511    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 04:58:22.137256    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 04:58:22.144255    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 04:58:22.151629    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 04:58:22.159434    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 04:58:22.166796    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 04:58:22.174035    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 04:58:22.181172    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 04:58:22.187972    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 04:58:22.195307    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750.pem --> /usr/share/ca-certificates/6750.pem (1338 bytes)
	I1007 04:58:22.202839    8424 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem --> /usr/share/ca-certificates/67502.pem (1708 bytes)
	I1007 04:58:22.210288    8424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 04:58:22.215352    8424 ssh_runner.go:195] Run: openssl version
	I1007 04:58:22.217262    8424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 04:58:22.220261    8424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 04:58:22.221859    8424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1007 04:58:22.221891    8424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 04:58:22.223746    8424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 04:58:22.227014    8424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6750.pem && ln -fs /usr/share/ca-certificates/6750.pem /etc/ssl/certs/6750.pem"
	I1007 04:58:22.230598    8424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6750.pem
	I1007 04:58:22.232138    8424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:45 /usr/share/ca-certificates/6750.pem
	I1007 04:58:22.232166    8424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6750.pem
	I1007 04:58:22.234196    8424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6750.pem /etc/ssl/certs/51391683.0"
	I1007 04:58:22.236854    8424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67502.pem && ln -fs /usr/share/ca-certificates/67502.pem /etc/ssl/certs/67502.pem"
	I1007 04:58:22.240048    8424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67502.pem
	I1007 04:58:22.241801    8424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:45 /usr/share/ca-certificates/67502.pem
	I1007 04:58:22.241829    8424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67502.pem
	I1007 04:58:22.243758    8424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67502.pem /etc/ssl/certs/3ec20f2e.0"
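Note: the three symlink commands above follow OpenSSL's c_rehash convention: every CA in /etc/ssl/certs must be reachable through a link named after the certificate's subject hash plus a ".0" suffix (b5213941.0 for minikubeCA.pem, 51391683.0 for 6750.pem, 3ec20f2e.0 for 67502.pem). A minimal Go sketch of that install step, shelling out to openssl the same way the ssh_runner lines do; the helper name installCACert is illustrative, not minikube's API:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCACert links certPath into /etc/ssl/certs under its
    // OpenSSL subject-hash name (<hash>.0), mirroring the
    // "openssl x509 -hash -noout" + "ln -fs" pair in the log above.
    func installCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }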
	I1007 04:58:22.247784    8424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 04:58:22.249495    8424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 04:58:22.251360    8424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 04:58:22.253271    8424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 04:58:22.255146    8424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 04:58:22.257246    8424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 04:58:22.259135    8424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
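Note: each "-checkend 86400" call asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit marks the cert for regeneration. The same check can be sketched in Go with crypto/x509 (the path and the helper name expiresWithin are illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path
    // expires within d — the question "openssl x509 -checkend"
    // answers via its exit status.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }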
	I1007 04:58:22.261130    8424 kubeadm.go:392] StartCluster: {Name:running-upgrade-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 04:58:22.261210    8424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 04:58:22.271331    8424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 04:58:22.275208    8424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 04:58:22.275219    8424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 04:58:22.275253    8424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 04:58:22.278439    8424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 04:58:22.278476    8424 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-802000" does not appear in /Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:58:22.278490    8424 kubeconfig.go:62] /Users/jenkins/minikube-integration/19763-6232/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-802000" cluster setting kubeconfig missing "running-upgrade-802000" context setting]
	I1007 04:58:22.278663    8424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/kubeconfig: {Name:mk4c5026c1645f877740c1904a5f1050530a5193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:58:22.279603    8424 kapi.go:59] client config for running-upgrade-802000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.key", CAFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10235bae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 04:58:22.280565    8424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 04:58:22.284396    8424 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-802000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
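Note: the drift detection above is plain "diff -u" between the kubeadm.yaml currently on the node and the freshly rendered kubeadm.yaml.new; diff exiting with status 1 is what flags "will reconfigure cluster". A minimal sketch of that decision, assuming diff's usual exit-code contract (0 identical, 1 different, 2+ error); kubeadmConfigDrift is an illustrative name:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeadmConfigDrift runs diff -u over the old and new kubeadm
    // configs; diff exits 1 when the files differ, which is the
    // "will reconfigure cluster" case in the log above.
    func kubeadmConfigDrift(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // identical configs, no drift
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // files differ: drift detected
    	}
    	return false, "", err // exit code >= 2: diff itself failed
    }

    func main() {
    	drift, diff, err := kubeadmConfigDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drift {
    		fmt.Println("config drift detected:\n" + diff)
    	}
    }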
	I1007 04:58:22.284402    8424 kubeadm.go:1160] stopping kube-system containers ...
	I1007 04:58:22.284450    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 04:58:22.299694    8424 docker.go:483] Stopping containers: [a2be1c0bfb94 3ce0018839d7 8233232c8533 f273a77d0afd a615feede37b 1dff5d275bc2 b1a6238bb990 c93013573c23 5ced4d1372d9 c63f6d43a7c8 501ae93920a8 8ebe23d22485 5aef19d82381 c070d20ebe86]
	I1007 04:58:22.299767    8424 ssh_runner.go:195] Run: docker stop a2be1c0bfb94 3ce0018839d7 8233232c8533 f273a77d0afd a615feede37b 1dff5d275bc2 b1a6238bb990 c93013573c23 5ced4d1372d9 c63f6d43a7c8 501ae93920a8 8ebe23d22485 5aef19d82381 c070d20ebe86
	I1007 04:58:22.314643    8424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 04:58:22.397002    8424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 04:58:22.400847    8424 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct  7 11:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  7 11:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  7 11:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Oct  7 11:58 /etc/kubernetes/scheduler.conf
	
	I1007 04:58:22.400891    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf
	I1007 04:58:22.404099    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 04:58:22.404135    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 04:58:22.407874    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf
	I1007 04:58:22.411377    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 04:58:22.411412    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 04:58:22.414613    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf
	I1007 04:58:22.417284    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 04:58:22.417316    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 04:58:22.419885    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf
	I1007 04:58:22.422747    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 04:58:22.422777    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 04:58:22.425338    8424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 04:58:22.428121    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 04:58:22.465243    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 04:58:22.896328    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 04:58:23.107665    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 04:58:23.139810    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
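Note: the restart path replays individual "kubeadm init phase" steps in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd) with the pinned v1.24.1 binary first on PATH, rather than running a full kubeadm init. A hedged sketch of that sequencing; restartPhases is an illustrative name, and a real runner would execute these over SSH as the ssh_runner lines do:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // restartPhases mirrors the ordered "kubeadm init phase ..."
    // commands in the log; stopping at the first failure matches
    // a restart that must not proceed past a broken phase.
    func restartPhases() error {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %q failed: %v\n%s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := restartPhases(); err != nil {
    		fmt.Println(err)
    	}
    }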
	I1007 04:58:23.163451    8424 api_server.go:52] waiting for apiserver process to appear ...
	I1007 04:58:23.163552    8424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 04:58:23.665777    8424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 04:58:24.165615    8424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 04:58:24.665594    8424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 04:58:24.669913    8424 api_server.go:72] duration metric: took 1.506470167s to wait for apiserver process to appear ...
	I1007 04:58:24.669940    8424 api_server.go:88] waiting for apiserver healthz status ...
	I1007 04:58:24.669951    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:58:29.672034    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:58:29.672085    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:58:34.672419    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:58:34.672498    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:58:39.673154    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:58:39.673260    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:58:44.674431    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:58:44.674481    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:58:49.675593    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:58:49.675694    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:58:54.677522    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:58:54.677629    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:58:59.680009    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:58:59.680094    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:04.682748    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:04.682846    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:09.685568    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:09.685663    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:14.686591    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:14.686683    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:19.689348    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:19.689396    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:24.691769    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
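Note: from this point the apiserver never answers /healthz; every probe dies with a ~5s client timeout ("context deadline exceeded") and the checker falls back to gathering component logs. A minimal sketch of such a poll loop with a per-request timeout; waitHealthz is an illustrative name, and InsecureSkipVerify stands in for the CA material the real client config above carries:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it answers 200 or the deadline
    // passes. Each probe gets its own short timeout, matching the
    // ~5s "context deadline exceeded" cadence in the log above.
    func waitHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz never became ready at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }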
	I1007 04:59:24.692089    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 04:59:24.716693    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 04:59:24.716812    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 04:59:24.732142    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 04:59:24.732237    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 04:59:24.745497    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 04:59:24.745582    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 04:59:24.757640    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 04:59:24.757725    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 04:59:24.767553    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 04:59:24.767645    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 04:59:24.777846    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 04:59:24.777923    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 04:59:24.793775    8424 logs.go:282] 0 containers: []
	W1007 04:59:24.793786    8424 logs.go:284] No container was found matching "kindnet"
	I1007 04:59:24.793857    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 04:59:24.804114    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 04:59:24.804133    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 04:59:24.804140    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 04:59:24.818850    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 04:59:24.818863    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 04:59:24.836013    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 04:59:24.836023    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 04:59:24.847692    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 04:59:24.847704    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 04:59:24.852848    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 04:59:24.852856    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 04:59:24.869514    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 04:59:24.869525    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 04:59:24.883043    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 04:59:24.883060    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 04:59:24.908728    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 04:59:24.908739    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 04:59:24.920653    8424 logs.go:123] Gathering logs for Docker ...
	I1007 04:59:24.920663    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 04:59:24.946340    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 04:59:24.946349    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 04:59:24.969448    8424 logs.go:123] Gathering logs for container status ...
	I1007 04:59:24.969459    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 04:59:24.981127    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 04:59:24.981139    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 04:59:25.019967    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 04:59:25.019978    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 04:59:25.035306    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 04:59:25.035316    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 04:59:25.047088    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 04:59:25.047102    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 04:59:25.120163    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 04:59:25.120176    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 04:59:25.139916    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 04:59:25.139929    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
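Note: each diagnostic pass that follows repeats one recipe: list container IDs per component with "docker ps -a --filter=name=k8s_<component>", then tail the last 400 log lines of every match. A sketch of that gather loop under the same docker CLI assumptions (containerIDs and gather are illustrative names):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all (including exited) containers whose
    // name matches k8s_<component>, as the repeated "docker ps -a"
    // calls in the log do.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // gather tails the last 400 log lines of every container found
    // for the component, mirroring "docker logs --tail 400 <id>".
    func gather(component string) {
    	ids, err := containerIDs(component)
    	if err != nil {
    		fmt.Println("list failed:", err)
    		return
    	}
    	for _, id := range ids {
    		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("== %s [%s] ==\n%s\n", component, id, out)
    	}
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		gather(c)
    	}
    }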
	I1007 04:59:27.653801    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:32.656217    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:32.656754    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 04:59:32.693412    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 04:59:32.693560    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 04:59:32.712155    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 04:59:32.712282    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 04:59:32.727322    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 04:59:32.727406    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 04:59:32.739612    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 04:59:32.739694    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 04:59:32.750918    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 04:59:32.750997    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 04:59:32.761970    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 04:59:32.762046    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 04:59:32.771989    8424 logs.go:282] 0 containers: []
	W1007 04:59:32.771999    8424 logs.go:284] No container was found matching "kindnet"
	I1007 04:59:32.772061    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 04:59:32.782954    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 04:59:32.782978    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 04:59:32.782990    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 04:59:32.797081    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 04:59:32.797092    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 04:59:32.808671    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 04:59:32.808682    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 04:59:32.827780    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 04:59:32.827791    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 04:59:32.839454    8424 logs.go:123] Gathering logs for Docker ...
	I1007 04:59:32.839468    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 04:59:32.865184    8424 logs.go:123] Gathering logs for container status ...
	I1007 04:59:32.865192    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 04:59:32.876166    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 04:59:32.876174    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 04:59:32.880540    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 04:59:32.880546    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 04:59:32.895529    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 04:59:32.895539    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 04:59:32.909922    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 04:59:32.909931    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 04:59:32.946469    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 04:59:32.946476    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 04:59:32.958458    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 04:59:32.958469    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 04:59:32.972552    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 04:59:32.972563    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 04:59:33.007967    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 04:59:33.007979    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 04:59:33.022062    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 04:59:33.022071    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 04:59:33.033700    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 04:59:33.033710    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 04:59:33.044974    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 04:59:33.044983    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 04:59:35.575152    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:40.577546    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:40.577799    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 04:59:40.606310    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 04:59:40.606455    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 04:59:40.624338    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 04:59:40.624440    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 04:59:40.638265    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 04:59:40.638337    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 04:59:40.649870    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 04:59:40.649947    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 04:59:40.660188    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 04:59:40.660268    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 04:59:40.670523    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 04:59:40.670601    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 04:59:40.680527    8424 logs.go:282] 0 containers: []
	W1007 04:59:40.680540    8424 logs.go:284] No container was found matching "kindnet"
	I1007 04:59:40.680608    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 04:59:40.690774    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 04:59:40.690790    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 04:59:40.690795    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 04:59:40.703100    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 04:59:40.703113    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 04:59:40.714815    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 04:59:40.714824    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 04:59:40.726553    8424 logs.go:123] Gathering logs for Docker ...
	I1007 04:59:40.726569    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 04:59:40.751571    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 04:59:40.751582    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 04:59:40.789027    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 04:59:40.789037    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 04:59:40.802886    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 04:59:40.802898    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 04:59:40.817480    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 04:59:40.817490    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 04:59:40.841865    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 04:59:40.841874    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 04:59:40.853048    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 04:59:40.853060    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 04:59:40.866331    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 04:59:40.866340    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 04:59:40.883705    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 04:59:40.883713    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 04:59:40.895013    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 04:59:40.895029    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 04:59:40.909245    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 04:59:40.909256    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 04:59:40.930292    8424 logs.go:123] Gathering logs for container status ...
	I1007 04:59:40.930302    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 04:59:40.942397    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 04:59:40.942406    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 04:59:40.946900    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 04:59:40.946908    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 04:59:43.484028    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:48.486806    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:48.486915    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 04:59:48.498960    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 04:59:48.499038    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 04:59:48.509323    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 04:59:48.509396    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 04:59:48.519545    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 04:59:48.519620    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 04:59:48.529863    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 04:59:48.529941    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 04:59:48.540522    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 04:59:48.540599    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 04:59:48.567859    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 04:59:48.567938    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 04:59:48.578239    8424 logs.go:282] 0 containers: []
	W1007 04:59:48.578249    8424 logs.go:284] No container was found matching "kindnet"
	I1007 04:59:48.578307    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 04:59:48.592423    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 04:59:48.592441    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 04:59:48.592446    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 04:59:48.606230    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 04:59:48.606242    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 04:59:48.619486    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 04:59:48.619496    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 04:59:48.633608    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 04:59:48.633618    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 04:59:48.645830    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 04:59:48.645844    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 04:59:48.664474    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 04:59:48.664486    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 04:59:48.700713    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 04:59:48.700732    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 04:59:48.734846    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 04:59:48.734858    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 04:59:48.762021    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 04:59:48.762040    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 04:59:48.773568    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 04:59:48.773581    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 04:59:48.785220    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 04:59:48.785233    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 04:59:48.789641    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 04:59:48.789650    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 04:59:48.800518    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 04:59:48.800534    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 04:59:48.812292    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 04:59:48.812305    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 04:59:48.826416    8424 logs.go:123] Gathering logs for Docker ...
	I1007 04:59:48.826430    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 04:59:48.850697    8424 logs.go:123] Gathering logs for container status ...
	I1007 04:59:48.850705    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 04:59:48.863161    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 04:59:48.863171    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 04:59:51.379606    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 04:59:56.382477    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 04:59:56.383033    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 04:59:56.423586    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 04:59:56.423736    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 04:59:56.445574    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 04:59:56.445690    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 04:59:56.460796    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 04:59:56.460881    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 04:59:56.473181    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 04:59:56.473258    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 04:59:56.484410    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 04:59:56.484486    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 04:59:56.495289    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 04:59:56.495372    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 04:59:56.505802    8424 logs.go:282] 0 containers: []
	W1007 04:59:56.505814    8424 logs.go:284] No container was found matching "kindnet"
	I1007 04:59:56.505882    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 04:59:56.516595    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 04:59:56.516616    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 04:59:56.516621    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 04:59:56.551408    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 04:59:56.551417    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 04:59:56.566348    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 04:59:56.566358    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 04:59:56.581626    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 04:59:56.581637    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 04:59:56.596386    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 04:59:56.596395    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 04:59:56.634359    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 04:59:56.634369    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 04:59:56.638823    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 04:59:56.638832    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 04:59:56.651115    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 04:59:56.651126    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 04:59:56.668803    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 04:59:56.668813    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 04:59:56.680080    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 04:59:56.680091    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 04:59:56.694806    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 04:59:56.694817    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 04:59:56.707267    8424 logs.go:123] Gathering logs for container status ...
	I1007 04:59:56.707278    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 04:59:56.718831    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 04:59:56.718842    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 04:59:56.733172    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 04:59:56.733187    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 04:59:56.756758    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 04:59:56.756775    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 04:59:56.768649    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 04:59:56.768657    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 04:59:56.780408    8424 logs.go:123] Gathering logs for Docker ...
	I1007 04:59:56.780422    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 04:59:59.306492    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:04.309414    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:04.309905    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:04.351128    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:04.351270    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:04.373892    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:04.374015    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:04.388266    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:04.388354    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:04.400840    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:04.400913    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:04.412206    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:04.412273    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:04.423408    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:04.423476    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:04.434864    8424 logs.go:282] 0 containers: []
	W1007 05:00:04.434881    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:04.434942    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:04.446267    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:04.446289    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:04.446294    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:04.450844    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:00:04.450851    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:00:04.473775    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:04.473784    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:04.488661    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:04.488672    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:04.499960    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:04.499971    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:04.538024    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:04.538036    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:04.572567    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:00:04.572581    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:00:04.586273    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:00:04.586284    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:00:04.600188    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:04.600197    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:04.623759    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:04.623767    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:00:04.637376    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:00:04.637385    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:00:04.652126    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:04.652135    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:04.663705    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:00:04.663715    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:00:04.675134    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:00:04.675143    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:00:04.699572    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:04.699583    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:04.710992    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:04.711003    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:04.722000    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:00:04.722013    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:00:07.236282    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:12.238851    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:12.239019    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:12.251741    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:12.251824    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:12.264798    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:12.264875    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:12.275423    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:12.275494    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:12.286295    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:12.286366    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:12.296977    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:12.297057    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:12.307265    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:12.307341    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:12.320212    8424 logs.go:282] 0 containers: []
	W1007 05:00:12.320224    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:12.320294    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:12.344887    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:12.344909    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:00:12.344914    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:00:12.369603    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:12.369616    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:00:12.383916    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:00:12.383927    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:00:12.401469    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:12.401480    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:12.414048    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:12.414065    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:12.426114    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:12.426129    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:12.440285    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:12.440294    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:12.464081    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:12.464091    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:12.499824    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:12.499833    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:12.503896    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:00:12.503902    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:00:12.517497    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:00:12.517506    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:00:12.531677    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:00:12.531688    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:00:12.548063    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:00:12.548077    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:00:12.559623    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:12.559632    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:12.594511    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:12.594520    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:12.607715    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:00:12.607726    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:00:12.622765    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:12.622774    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:15.136332    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:20.139124    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:20.139614    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:20.176703    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:20.176853    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:20.200157    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:20.200258    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:20.214526    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:20.214597    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:20.226011    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:20.226090    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:20.236740    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:20.236813    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:20.247497    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:20.247565    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:20.261662    8424 logs.go:282] 0 containers: []
	W1007 05:00:20.261675    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:20.261746    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:20.272685    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:20.272702    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:00:20.272707    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:00:20.287447    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:20.287456    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:20.302857    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:20.302866    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:20.328750    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:00:20.328759    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:00:20.350132    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:20.350143    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:20.361820    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:20.361833    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:20.400138    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:20.400147    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:20.404348    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:00:20.404355    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:00:20.419911    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:00:20.419923    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:00:20.431250    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:20.431261    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:20.442710    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:00:20.442719    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:00:20.466139    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:20.466153    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:20.480761    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:00:20.480772    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:00:20.499937    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:20.499951    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:20.511106    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:20.511115    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:20.551782    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:20.551793    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:00:20.565786    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:00:20.565799    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
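
The block above is one full retry cycle: probe the apiserver healthz endpoint, hit the 5-second client timeout, then enumerate each control-plane container and tail its logs before probing again. A minimal sketch of that polling pattern, assuming only the Go standard library (the URL and timeout are taken from the log; the function names and the skipped certificate check are illustrative assumptions, not minikube's actual implementation):

    // sketch of the healthz polling loop visible in this trace
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            // matches the ~5s gap between each "Checking" and "stopped" line
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // assumption: the test cluster serves a self-signed cert,
                // so verification is skipped in this sketch
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // on timeout this yields the log's wording:
            // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        for {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                // a diagnostics pass (the docker ps / docker logs commands
                // seen above) would run here before the next probe; the log
                // shows roughly 2.5s between cycles
                time.Sleep(2 * time.Second)
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }

The per-request client timeout is what produces the "Client.Timeout exceeded while awaiting headers" message in each failed probe; the diagnostics pass that fills the gap between probes is sketched at the end of this trace.
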
	I1007 05:00:23.080136    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:28.082598    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:28.082745    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:28.095038    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:28.095118    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:28.107055    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:28.107154    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:28.118019    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:28.118089    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:28.129703    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:28.129781    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:28.140731    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:28.140803    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:28.151702    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:28.151774    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:28.162218    8424 logs.go:282] 0 containers: []
	W1007 05:00:28.162235    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:28.162297    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:28.174753    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:28.174772    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:28.174777    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:00:28.188977    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:00:28.188993    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:00:28.207844    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:00:28.207854    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:00:28.232259    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:28.232269    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:28.266818    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:00:28.266829    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:00:28.281534    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:00:28.281546    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:00:28.296250    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:28.296264    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:28.311946    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:28.311957    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:28.316666    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:00:28.316673    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:00:28.330871    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:28.330882    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:28.346187    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:28.346196    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:28.372725    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:00:28.372735    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:00:28.385254    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:28.385268    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:28.421943    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:00:28.421950    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:00:28.433689    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:28.433700    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:28.467309    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:28.467319    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:28.485606    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:28.485621    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:30.999591    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:36.001944    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:36.002213    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:36.023840    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:36.023955    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:36.038799    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:36.038882    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:36.051682    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:36.051755    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:36.062112    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:36.062204    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:36.072443    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:36.072526    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:36.082943    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:36.083025    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:36.093561    8424 logs.go:282] 0 containers: []
	W1007 05:00:36.093574    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:36.093650    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:36.104656    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:36.104677    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:00:36.104683    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:00:36.118130    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:36.118140    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:36.129974    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:00:36.129989    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:00:36.143945    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:00:36.143960    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:00:36.155964    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:36.155974    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:36.193979    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:36.193992    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:36.205683    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:00:36.205697    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:00:36.219910    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:36.219923    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:36.231553    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:36.231563    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:36.251466    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:00:36.251475    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:00:36.263035    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:36.263044    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:36.297353    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:36.297366    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:00:36.311364    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:00:36.311373    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:00:36.329368    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:36.329378    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:36.341333    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:36.341346    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:36.364753    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:36.364759    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:36.368686    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:00:36.368692    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:00:38.893363    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:43.895577    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:43.895768    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:43.911760    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:43.911854    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:43.924833    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:43.924916    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:43.937109    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:43.937190    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:43.947558    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:43.947635    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:43.957813    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:43.957896    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:43.968538    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:43.968619    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:43.978846    8424 logs.go:282] 0 containers: []
	W1007 05:00:43.978860    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:43.978925    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:43.990163    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:43.990185    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:43.990190    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:44.004623    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:00:44.004632    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:00:44.016639    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:44.016656    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:44.021031    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:44.021037    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:00:44.037393    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:00:44.037403    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:00:44.051160    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:00:44.051170    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:00:44.065768    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:44.065776    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:44.083491    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:44.083504    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:44.094694    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:44.094704    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:44.120386    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:00:44.120396    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:00:44.134346    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:00:44.134356    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:00:44.149992    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:44.150001    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:44.161426    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:44.161439    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:44.172740    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:00:44.172753    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:00:44.189941    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:44.189954    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:44.227040    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:44.227051    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:44.261885    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:00:44.261896    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:00:46.789206    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:51.791485    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:51.791618    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:51.804384    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:51.804469    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:51.817485    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:51.817568    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:51.829030    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:51.829120    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:51.841697    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:51.841787    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:51.854522    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:51.854601    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:51.870229    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:51.870312    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:51.881878    8424 logs.go:282] 0 containers: []
	W1007 05:00:51.881889    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:51.881960    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:51.898011    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:51.898032    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:51.898038    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:51.923554    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:00:51.923569    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:00:51.937184    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:00:51.937197    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:00:51.957680    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:00:51.957695    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:00:51.972830    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:00:51.972847    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:00:51.991516    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:51.991530    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:52.003713    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:52.003726    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:52.016589    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:52.016601    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:52.031995    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:00:52.032006    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:00:52.058059    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:52.058074    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:00:52.073184    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:52.073197    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:52.085104    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:52.085116    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:52.097525    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:52.097538    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:52.102421    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:52.102431    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:52.146182    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:00:52.146195    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:00:52.165574    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:52.165588    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:52.207597    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:00:52.207612    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:00:54.722362    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:00:59.724597    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:59.724877    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:59.744932    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:59.745041    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:59.759269    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:59.759349    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:59.771972    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:59.772052    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:59.782857    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:59.782940    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:59.793338    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:59.793408    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:59.803567    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:59.803630    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:59.813579    8424 logs.go:282] 0 containers: []
	W1007 05:00:59.813591    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:59.813661    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:59.825893    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:59.825913    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:59.825918    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:59.864512    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:59.864521    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:59.868684    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:59.868692    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:59.904368    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:59.904378    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:59.915868    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:59.915884    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:59.927361    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:59.927373    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:59.939296    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:59.939307    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:59.951309    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:59.951321    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:59.966386    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:59.966396    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:59.991738    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:59.991744    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:00.008693    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:00.008707    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:00.020512    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:00.020522    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:00.034505    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:00.034520    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:00.052372    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:00.052383    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:00.064669    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:00.064680    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:00.078289    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:00.078299    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:00.103461    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:00.103471    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:02.619734    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:07.622427    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:07.622606    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:07.634221    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:07.634297    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:07.648586    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:07.648675    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:07.659878    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:07.659953    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:07.670831    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:07.670910    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:07.685514    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:07.685607    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:07.696072    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:07.696147    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:07.706172    8424 logs.go:282] 0 containers: []
	W1007 05:01:07.706183    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:07.706248    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:07.717205    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:07.717225    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:07.717230    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:07.754909    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:07.754919    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:07.759638    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:07.759645    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:07.800858    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:07.800870    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:07.819049    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:07.819058    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:07.833753    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:07.833764    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:07.848457    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:07.848468    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:07.866400    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:07.866411    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:07.877686    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:07.877698    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:07.889047    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:07.889057    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:07.902621    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:07.902631    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:07.927838    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:07.927845    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:07.953381    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:07.953391    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:07.968218    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:07.968227    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:07.987135    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:07.987146    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:08.002107    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:08.002117    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:08.013776    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:08.013785    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:10.528102    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:15.530371    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:15.530535    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:15.543712    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:15.543801    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:15.556164    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:15.556247    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:15.568121    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:15.568195    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:15.579813    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:15.579889    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:15.593844    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:15.593930    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:15.609871    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:15.609954    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:15.621176    8424 logs.go:282] 0 containers: []
	W1007 05:01:15.621189    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:15.621256    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:15.632592    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:15.632611    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:15.632617    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:15.645009    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:15.645022    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:15.660797    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:15.660811    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:15.674914    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:15.674925    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:15.711222    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:15.711234    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:15.726009    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:15.726025    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:15.737561    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:15.737573    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:15.751229    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:15.751240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:15.766094    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:15.766104    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:15.778009    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:15.778018    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:15.795985    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:15.795995    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:15.832410    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:15.832417    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:15.856229    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:15.856240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:15.870375    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:15.870385    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:15.885143    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:15.885153    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:15.909699    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:15.909706    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:15.914301    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:15.914308    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:18.428519    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:23.431291    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:23.431419    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:23.445135    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:23.445214    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:23.456616    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:23.456697    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:23.467071    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:23.467147    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:23.477881    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:23.477953    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:23.488419    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:23.488495    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:23.499055    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:23.499134    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:23.509088    8424 logs.go:282] 0 containers: []
	W1007 05:01:23.509099    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:23.509168    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:23.520306    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:23.520326    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:23.520331    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:23.537515    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:23.537529    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:23.550676    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:23.550689    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:23.574895    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:23.574906    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:23.579107    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:23.579114    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:23.594667    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:23.594680    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:23.633229    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:23.633245    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:23.647138    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:23.647150    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:23.658970    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:23.658981    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:23.674602    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:23.674614    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:23.686203    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:23.686217    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:23.700688    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:23.700697    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:23.726379    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:23.726398    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:23.747182    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:23.747199    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:23.759975    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:23.759988    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:23.773083    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:23.773095    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:23.814450    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:23.814470    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:26.332118    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:31.334459    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:31.334622    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:31.349075    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:31.349189    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:31.361568    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:31.361654    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:31.371936    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:31.372009    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:31.382574    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:31.382643    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:31.393248    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:31.393321    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:31.403504    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:31.403591    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:31.413391    8424 logs.go:282] 0 containers: []
	W1007 05:01:31.413404    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:31.413469    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:31.423696    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:31.423718    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:31.423722    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:31.462393    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:31.462404    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:31.501473    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:31.501489    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:31.514267    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:31.514279    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:31.538405    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:31.538420    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:31.554798    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:31.554811    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:31.569195    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:31.569214    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:31.589160    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:31.589174    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:31.606770    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:31.606785    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:31.611383    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:31.611390    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:31.636613    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:31.636624    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:31.650559    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:31.650570    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:31.666492    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:31.666505    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:31.680594    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:31.680607    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:31.705116    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:31.705135    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:31.730938    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:31.730957    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:31.744539    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:31.744553    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:34.260961    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:39.263160    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:39.263322    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:39.274874    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:39.274952    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:39.289325    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:39.289399    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:39.299634    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:39.299708    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:39.311011    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:39.311092    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:39.322131    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:39.322202    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:39.335876    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:39.335952    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:39.346228    8424 logs.go:282] 0 containers: []
	W1007 05:01:39.346241    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:39.346298    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:39.357445    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:39.357471    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:39.357477    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:39.371983    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:39.371995    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:39.383625    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:39.383637    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:39.401837    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:39.401847    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:39.413396    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:39.413413    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:39.436097    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:39.436106    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:39.472641    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:39.472653    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:39.477230    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:39.477236    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:39.490051    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:39.490063    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:39.501829    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:39.501840    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:39.513636    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:39.513650    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:39.550501    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:39.550512    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:39.564665    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:39.564679    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:39.588218    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:39.588233    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:39.602519    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:39.602528    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:39.617114    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:39.617127    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:39.631182    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:39.631196    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:42.146142    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:47.148478    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:47.148635    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:47.159943    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:47.160025    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:47.170528    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:47.170609    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:47.181689    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:47.181771    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:47.195610    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:47.195684    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:47.206565    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:47.206640    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:47.217526    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:47.217598    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:47.227780    8424 logs.go:282] 0 containers: []
	W1007 05:01:47.227797    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:47.227867    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:47.238656    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:47.238674    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:47.238678    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:47.250787    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:47.250803    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:47.262219    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:47.262233    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:47.273933    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:47.273948    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:47.312254    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:47.312267    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:47.324435    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:47.324446    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:47.343587    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:47.343599    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:47.358810    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:47.358821    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:47.383473    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:47.383483    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:47.395544    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:47.395559    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:47.419147    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:47.419164    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:47.460697    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:47.460714    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:47.479297    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:47.479311    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:47.494650    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:47.494661    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:47.519551    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:47.519566    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:47.529282    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:47.529296    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:47.553937    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:47.553951    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:50.068299    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:55.069774    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:55.070090    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:55.092213    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:55.092326    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:55.113870    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:55.113953    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:55.125401    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:55.125480    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:55.136636    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:55.136714    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:55.147390    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:55.147468    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:55.158157    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:55.158236    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:55.168597    8424 logs.go:282] 0 containers: []
	W1007 05:01:55.168609    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:55.168674    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:55.179337    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:55.179356    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:55.179362    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:55.195329    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:55.195343    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:55.209282    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:55.209292    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:55.220930    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:55.220941    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:55.232369    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:55.232379    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:55.249978    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:55.249987    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:55.263949    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:55.263964    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:55.288003    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:55.288014    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:55.305327    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:55.305336    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:55.316904    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:55.316914    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:55.328428    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:55.328439    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:55.352609    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:55.352621    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:55.364440    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:55.364451    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:55.402389    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:55.402398    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:55.406747    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:55.406754    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:55.442206    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:55.442216    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:55.456060    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:55.456071    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:57.970219    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:02.972508    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:02.972619    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:02.991980    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:02:02.992074    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:03.003091    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:02:03.003175    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:03.014205    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:02:03.014298    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:03.025875    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:02:03.025966    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:03.037317    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:02:03.037394    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:03.048651    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:02:03.048724    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:03.059730    8424 logs.go:282] 0 containers: []
	W1007 05:02:03.059742    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:03.059818    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:03.070782    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:02:03.070803    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:03.070811    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:03.094177    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:02:03.094186    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:02:03.113594    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:03.113610    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:03.118588    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:03.118594    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:03.153400    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:02:03.153410    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:02:03.167869    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:02:03.167885    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:02:03.183051    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:02:03.183068    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:02:03.195705    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:02:03.195718    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:03.207545    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:03.207560    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:03.244849    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:02:03.244857    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:02:03.259656    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:02:03.259668    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:02:03.271594    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:02:03.271605    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:02:03.288969    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:02:03.288984    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:02:03.305576    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:02:03.305588    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:02:03.329406    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:02:03.329420    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:02:03.341110    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:02:03.341125    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:02:03.353262    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:02:03.353273    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:02:05.869191    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:10.871617    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:10.871777    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:10.883416    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:02:10.883504    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:10.895558    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:02:10.895640    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:10.906245    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:02:10.906325    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:10.916594    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:02:10.916674    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:10.927588    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:02:10.927666    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:10.938214    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:02:10.938293    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:10.951022    8424 logs.go:282] 0 containers: []
	W1007 05:02:10.951035    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:10.951102    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:10.961453    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:02:10.961475    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:02:10.961481    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:10.973561    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:02:10.973574    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:02:10.997562    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:02:10.997573    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:02:11.011046    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:02:11.011056    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:02:11.026705    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:02:11.026715    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:02:11.038399    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:11.038411    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:11.061131    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:11.061140    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:11.065774    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:02:11.065783    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:02:11.082504    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:02:11.082518    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:02:11.094247    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:11.094257    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:11.132269    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:02:11.132281    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:02:11.146957    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:02:11.146971    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:02:11.161384    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:02:11.161395    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:02:11.173804    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:11.173818    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:11.207362    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:02:11.207378    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:02:11.221670    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:02:11.221680    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:02:11.233250    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:02:11.233261    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:02:13.752769    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:18.755371    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:18.755580    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:18.770001    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:02:18.770097    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:18.781718    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:02:18.781798    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:18.795002    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:02:18.795083    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:18.805457    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:02:18.805529    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:18.816198    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:02:18.816296    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:18.827118    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:02:18.827201    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:18.841013    8424 logs.go:282] 0 containers: []
	W1007 05:02:18.841023    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:18.841089    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:18.851605    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:02:18.851622    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:18.851627    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:18.856480    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:02:18.856489    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:02:18.870452    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:02:18.870463    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:02:18.885142    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:02:18.885152    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:18.899111    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:18.899121    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:18.936622    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:02:18.936637    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:02:18.951347    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:02:18.951356    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:02:18.962876    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:02:18.962887    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:02:18.974399    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:18.974413    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:18.997387    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:02:18.997395    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:02:19.021440    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:02:19.021452    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:02:19.036788    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:19.036801    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:19.076059    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:02:19.076069    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:02:19.095719    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:02:19.095731    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:02:19.113243    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:02:19.113254    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:02:19.125038    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:02:19.125048    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:02:19.142726    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:02:19.142737    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:02:21.656762    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:26.659123    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:26.659238    8424 kubeadm.go:597] duration metric: took 4m4.38473425s to restartPrimaryControlPlane
	W1007 05:02:26.659324    8424 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:02:26.659361    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:02:27.690694    8424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031316625s)
	I1007 05:02:27.690768    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:02:27.695925    8424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:02:27.698727    8424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:02:27.701579    8424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:02:27.701585    8424 kubeadm.go:157] found existing configuration files:
	
	I1007 05:02:27.701614    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf
	I1007 05:02:27.704645    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:02:27.704680    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:02:27.707381    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf
	I1007 05:02:27.709854    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:02:27.709886    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:02:27.713349    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf
	I1007 05:02:27.716410    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:02:27.716443    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:02:27.719139    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf
	I1007 05:02:27.721974    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:02:27.722001    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
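
The grep-then-rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise. A sketch of the idea (file list and endpoint from the log; running locally rather than over ssh is a simplification):

    // stale_config_cleanup.go: sketch of the kubeadm.go:163 behavior above.
    // Remove any kubeconfig that does not mention the expected endpoint.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:51263"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: treat as stale and remove,
                // mirroring the "may not be in ... - will remove" lines.
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f)
            }
        }
    }
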
	I1007 05:02:27.725169    8424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:02:27.743582    8424 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:02:27.743644    8424 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:02:27.792586    8424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:02:27.792675    8424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:02:27.792725    8424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 05:02:27.846206    8424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:02:27.849554    8424 out.go:235]   - Generating certificates and keys ...
	I1007 05:02:27.849586    8424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:02:27.849614    8424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:02:27.849661    8424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:02:27.849697    8424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:02:27.849738    8424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:02:27.849793    8424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:02:27.849842    8424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:02:27.849877    8424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:02:27.849927    8424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:02:27.849970    8424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:02:27.849990    8424 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:02:27.850018    8424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:02:27.942586    8424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:02:28.217460    8424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:02:28.283996    8424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:02:28.557780    8424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:02:28.585423    8424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:02:28.585833    8424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:02:28.585854    8424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:02:28.673637    8424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:02:28.677818    8424 out.go:235]   - Booting up control plane ...
	I1007 05:02:28.677865    8424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:02:28.677906    8424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:02:28.677945    8424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:02:28.677988    8424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:02:28.679680    8424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:02:33.185755    8424 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505681 seconds
	I1007 05:02:33.185870    8424 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:02:33.191902    8424 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:02:33.701065    8424 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:02:33.701170    8424 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-802000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:02:34.205909    8424 kubeadm.go:310] [bootstrap-token] Using token: tdjbgm.u9rqa1rmq6a14rbm
	I1007 05:02:34.211588    8424 out.go:235]   - Configuring RBAC rules ...
	I1007 05:02:34.211646    8424 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:02:34.211686    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:02:34.216153    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:02:34.217251    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 05:02:34.218135    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:02:34.218959    8424 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:02:34.222186    8424 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:02:34.381729    8424 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:02:34.609850    8424 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:02:34.610338    8424 kubeadm.go:310] 
	I1007 05:02:34.610368    8424 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:02:34.610377    8424 kubeadm.go:310] 
	I1007 05:02:34.610422    8424 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:02:34.610425    8424 kubeadm.go:310] 
	I1007 05:02:34.610441    8424 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:02:34.610476    8424 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:02:34.610505    8424 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:02:34.610509    8424 kubeadm.go:310] 
	I1007 05:02:34.610534    8424 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:02:34.610537    8424 kubeadm.go:310] 
	I1007 05:02:34.610563    8424 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:02:34.610570    8424 kubeadm.go:310] 
	I1007 05:02:34.610597    8424 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:02:34.610645    8424 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:02:34.610684    8424 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:02:34.610687    8424 kubeadm.go:310] 
	I1007 05:02:34.610741    8424 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:02:34.610792    8424 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:02:34.610795    8424 kubeadm.go:310] 
	I1007 05:02:34.610856    8424 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tdjbgm.u9rqa1rmq6a14rbm \
	I1007 05:02:34.610913    8424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 \
	I1007 05:02:34.610930    8424 kubeadm.go:310] 	--control-plane 
	I1007 05:02:34.610937    8424 kubeadm.go:310] 
	I1007 05:02:34.610988    8424 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:02:34.610991    8424 kubeadm.go:310] 
	I1007 05:02:34.611031    8424 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tdjbgm.u9rqa1rmq6a14rbm \
	I1007 05:02:34.611080    8424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 
	I1007 05:02:34.611249    8424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 05:02:34.611258    8424 cni.go:84] Creating CNI manager for ""
	I1007 05:02:34.611267    8424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:02:34.614429    8424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:02:34.622500    8424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:02:34.625885    8424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
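
The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config the "Configuring bridge CNI" step refers to. The exact contents are not in the log, so the conflist below is only representative of the shape such a file takes; every field value is an assumption:

    // bridge_conflist.go: emits an illustrative bridge CNI conflist of the
    // kind written to /etc/cni/net.d/1-k8s.conflist. Subnet and plugin
    // options are assumptions, not the file from this run.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conf := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conf, "", "  ")
        fmt.Println(string(out))
    }
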
	I1007 05:02:34.630928    8424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:02:34.631018    8424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:02:34.631019    8424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-802000 minikube.k8s.io/updated_at=2024_10_07T05_02_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=running-upgrade-802000 minikube.k8s.io/primary=true
	I1007 05:02:34.674475    8424 ops.go:34] apiserver oom_adj: -16
	I1007 05:02:34.674523    8424 kubeadm.go:1113] duration metric: took 43.552417ms to wait for elevateKubeSystemPrivileges
	I1007 05:02:34.674535    8424 kubeadm.go:394] duration metric: took 4m12.414154209s to StartCluster
	I1007 05:02:34.674545    8424 settings.go:142] acquiring lock: {Name:mk5872a0c73b3208924793fa59bf550628bdf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:02:34.674748    8424 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:02:34.675128    8424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/kubeconfig: {Name:mk4c5026c1645f877740c1904a5f1050530a5193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:02:34.675321    8424 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:02:34.675334    8424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:02:34.675367    8424 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-802000"
	I1007 05:02:34.675389    8424 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-802000"
	I1007 05:02:34.675405    8424 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-802000"
	I1007 05:02:34.675422    8424 config.go:182] Loaded profile config "running-upgrade-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:02:34.675414    8424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-802000"
	W1007 05:02:34.675423    8424 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:02:34.675484    8424 host.go:66] Checking if "running-upgrade-802000" exists ...
	I1007 05:02:34.679440    8424 out.go:177] * Verifying Kubernetes components...
	I1007 05:02:34.680169    8424 kapi.go:59] client config for running-upgrade-802000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.key", CAFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10235bae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I1007 05:02:34.682655    8424 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-802000"
	W1007 05:02:34.682660    8424 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:02:34.682669    8424 host.go:66] Checking if "running-upgrade-802000" exists ...
	I1007 05:02:34.683196    8424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:02:34.683201    8424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:02:34.683206    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	I1007 05:02:34.685416    8424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:02:34.689466    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:02:34.695434    8424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:02:34.695441    8424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:02:34.695448    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	I1007 05:02:34.789477    8424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:02:34.795411    8424 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:02:34.795474    8424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:02:34.799819    8424 api_server.go:72] duration metric: took 124.485917ms to wait for apiserver process to appear ...
	I1007 05:02:34.799829    8424 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:02:34.799835    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:34.817736    8424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:02:34.841710    8424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:02:35.147347    8424 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:02:35.147359    8424 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 05:02:39.801937    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:39.801998    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:44.802344    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:44.802372    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:49.802753    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:49.802774    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:54.803189    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:54.803214    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:59.803758    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:59.803788    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:04.804505    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:04.804548    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:03:05.149740    8424 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
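
The default-storageclass failure above comes from the addon callback trying to list StorageClasses through the unreachable apiserver. Roughly how such a list call looks with client-go (the kubeconfig path is an assumption for the sketch; requires the k8s.io/client-go module):

    // list_storageclasses.go: sketch of the call that failed above with
    // "dial tcp 10.0.2.15:8443: i/o timeout".
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, chosen for the sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            // The branch this run hit: the GET to
            // /apis/storage.k8s.io/v1/storageclasses timed out.
            fmt.Println("Error listing StorageClasses:", err)
            return
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
    }
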
	I1007 05:03:05.154736    8424 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:03:05.163598    8424 addons.go:510] duration metric: took 30.488348958s for enable addons: enabled=[storage-provisioner]
	I1007 05:03:09.805509    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:09.805581    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:14.806940    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:14.806980    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:19.807626    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:19.807650    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:24.809338    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:24.809403    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:29.811561    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:29.811592    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:34.813808    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:34.813976    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:34.825874    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:34.825943    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:34.836478    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:34.836557    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:34.847514    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:34.847588    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:34.858033    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:34.858101    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:34.868885    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:34.868959    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:34.880256    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:34.880329    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:34.893664    8424 logs.go:282] 0 containers: []
	W1007 05:03:34.893677    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:34.893740    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:34.904126    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:34.904148    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:34.904154    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:34.917826    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:34.917836    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:34.923008    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:34.923019    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:34.958066    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:34.958077    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:34.969867    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:34.969878    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:34.981294    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:34.981304    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:03:34.996230    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:34.996240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:35.008106    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:35.008116    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:35.032778    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:35.032786    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:35.065128    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:35.065135    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:35.079637    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:35.079647    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:35.093799    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:35.093810    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:35.106475    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:35.106486    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
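
Each failed health check triggers the diagnostic pass above: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} query per control-plane component, whose ID counts feed the logs.go:282 lines. A sketch of that discovery step, under the assumption that docker is invoked locally rather than through minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of containers whose name matches the
    // k8s_<component> prefix, mirroring the filter used in the log above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		// Mirrors the logs.go:282 "N containers: [...]" lines.
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

In this run the result set is stable across cycles (kube-apiserver a249e3838cce, etcd 4ade76321e55, and so on), and "kindnet" consistently matches nothing, which is expected for a single-node Docker-runtime cluster.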
	I1007 05:03:37.625972    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:42.628620    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:42.628897    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:42.647253    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:42.647344    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:42.660808    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:42.660895    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:42.671893    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:42.671969    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:42.682289    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:42.682370    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:42.692907    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:42.692986    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:42.703679    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:42.703759    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:42.717246    8424 logs.go:282] 0 containers: []
	W1007 05:03:42.717258    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:42.717321    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:42.727715    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:42.727730    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:42.727737    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:03:42.742901    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:42.742916    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:42.754939    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:42.754955    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:42.789846    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:42.789853    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:42.794488    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:42.794494    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:42.830894    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:42.830910    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:42.845378    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:42.845390    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:42.857302    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:42.857313    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:42.868340    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:42.868354    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:42.893430    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:42.893442    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:42.905782    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:42.905799    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:42.919755    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:42.919769    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:42.934556    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:42.934566    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:03:45.454662    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:50.457052    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:50.457322    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:50.484189    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:50.484340    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:50.501199    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:50.501287    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:50.514759    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:50.514859    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:50.526676    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:50.526750    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:50.537187    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:50.537268    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:50.547915    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:50.547993    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:50.557579    8424 logs.go:282] 0 containers: []
	W1007 05:03:50.557590    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:50.557650    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:50.568079    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:50.568094    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:50.568099    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:50.586738    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:50.586752    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:50.598844    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:50.598854    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:50.610347    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:50.610358    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:03:50.628687    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:50.628700    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:50.633286    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:50.633292    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:50.647262    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:50.647272    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:03:50.662816    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:50.662832    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:50.682620    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:50.682631    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:50.694410    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:50.694425    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:50.720531    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:50.720541    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:50.731779    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:50.731790    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:50.767214    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:50.767222    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:53.343727    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:58.346073    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:58.346428    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:58.376989    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:58.377125    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:58.399027    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:58.399111    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:58.417079    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:58.417158    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:58.427867    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:58.427934    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:58.438656    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:58.438721    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:58.449802    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:58.449880    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:58.461812    8424 logs.go:282] 0 containers: []
	W1007 05:03:58.461824    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:58.461890    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:58.472243    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:58.472260    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:58.472266    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:58.484184    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:58.484200    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:58.495495    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:58.495508    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:58.507563    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:58.507574    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:58.519063    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:58.519074    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:58.524037    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:58.524045    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:58.564486    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:58.564497    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:58.578906    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:58.578916    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:03:58.599049    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:58.599060    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:58.625301    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:58.625329    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:58.636445    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:58.636456    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:58.672047    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:58.672057    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:58.689879    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:58.689891    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:01.206944    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:06.209310    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:06.209550    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:06.230596    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:06.230690    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:06.243358    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:06.243438    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:06.254385    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:06.254459    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:06.264687    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:06.264765    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:06.274951    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:06.275036    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:06.285068    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:06.285140    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:06.295127    8424 logs.go:282] 0 containers: []
	W1007 05:04:06.295139    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:06.295214    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:06.305674    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:06.305691    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:06.305696    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:06.323207    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:06.323217    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:06.347745    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:06.347755    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:06.383217    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:06.383226    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:06.398440    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:06.398449    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:06.424792    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:06.424805    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:06.437201    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:06.437209    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:06.455586    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:06.455597    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:06.469834    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:06.469850    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:06.482453    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:06.482464    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:06.516155    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:06.516163    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:06.520835    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:06.520845    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:06.535181    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:06.535190    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:09.048971    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:14.051197    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:14.051409    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:14.065648    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:14.065743    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:14.077023    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:14.077100    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:14.087724    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:14.087801    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:14.099127    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:14.099206    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:14.110516    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:14.110598    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:14.121689    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:14.121762    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:14.131852    8424 logs.go:282] 0 containers: []
	W1007 05:04:14.131865    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:14.131929    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:14.142024    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:14.142040    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:14.142045    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:14.156296    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:14.156310    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:14.167653    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:14.167663    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:14.189390    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:14.189400    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:14.213146    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:14.213155    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:14.246815    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:14.246824    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:14.283026    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:14.283037    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:14.294886    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:14.294898    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:14.309959    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:14.309972    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:14.321234    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:14.321243    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:14.332853    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:14.332867    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:14.343901    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:14.343914    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:14.348509    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:14.348515    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:16.864360    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:21.866637    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:21.866832    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:21.881221    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:21.881308    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:21.891756    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:21.891841    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:21.902956    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:21.903031    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:21.913603    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:21.913674    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:21.923864    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:21.923950    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:21.935182    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:21.935253    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:21.953382    8424 logs.go:282] 0 containers: []
	W1007 05:04:21.953392    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:21.953451    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:21.963676    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:21.963689    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:21.963694    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:21.996630    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:21.996637    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:22.008192    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:22.008202    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:22.023062    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:22.023073    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:22.046421    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:22.046427    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:22.068686    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:22.068699    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:22.072996    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:22.073004    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:22.132706    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:22.132716    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:22.147242    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:22.147253    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:22.161461    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:22.161475    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:22.173156    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:22.173168    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:22.187863    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:22.187877    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:22.199606    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:22.199616    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
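
The "Gathering logs for ..." phase is plain command execution: container logs come from docker logs --tail 400 <id>, host services from journalctl, kernel messages from dmesg. A self-contained sketch using command strings copied from the log, run directly instead of over SSH (which is an assumption made to keep the example standalone):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command strings are taken verbatim from the ssh_runner entries above.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    		"kube-apiserver":   "docker logs --tail 400 a249e3838cce",
    	}
    	for name, cmd := range sources {
    		fmt.Println("Gathering logs for", name, "...")
    		// Each source is just a bash invocation whose output is collected.
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Println(name, "failed:", err)
    		}
    		fmt.Print(string(out))
    	}
    }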
	I1007 05:04:24.712150    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:29.713974    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:29.714222    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:29.733624    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:29.733723    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:29.748893    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:29.748977    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:29.761650    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:29.761733    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:29.777434    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:29.777516    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:29.790438    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:29.790532    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:29.802229    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:29.802308    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:29.812775    8424 logs.go:282] 0 containers: []
	W1007 05:04:29.812786    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:29.812855    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:29.823313    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:29.823328    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:29.823336    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:29.858151    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:29.858163    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:29.872657    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:29.872672    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:29.886035    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:29.886046    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:29.898143    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:29.898154    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:29.917941    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:29.917950    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:29.933316    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:29.933324    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:29.957624    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:29.957632    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:29.990184    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:29.990192    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:30.002051    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:30.002066    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:30.018008    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:30.018022    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:30.035660    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:30.035673    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:30.047228    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:30.047243    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:32.554109    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:37.556356    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:37.556611    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:37.581728    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:37.581831    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:37.595096    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:37.595177    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:37.605897    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:37.605969    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:37.616741    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:37.616818    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:37.627438    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:37.627518    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:37.641307    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:37.641422    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:37.652432    8424 logs.go:282] 0 containers: []
	W1007 05:04:37.652448    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:37.652519    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:37.663349    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:37.663366    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:37.663372    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:37.677921    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:37.677930    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:37.689866    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:37.689877    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:37.702191    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:37.702201    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:37.717047    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:37.717055    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:37.728962    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:37.728973    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:37.752913    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:37.752921    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:37.787285    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:37.787301    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:37.792139    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:37.792144    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:37.806638    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:37.806651    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:37.823792    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:37.823801    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:37.839882    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:37.839895    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:37.854114    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:37.854127    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:40.389427    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:45.391969    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:45.392207    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:45.409028    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:45.409130    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:45.421319    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:45.421394    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:45.431949    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:45.432029    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:45.442637    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:45.442718    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:45.453259    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:45.453344    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:45.463281    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:45.463348    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:45.474412    8424 logs.go:282] 0 containers: []
	W1007 05:04:45.474425    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:45.474491    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:45.486336    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:45.486352    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:45.486358    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:45.491319    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:45.491326    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:45.527765    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:45.527777    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:45.542470    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:45.542484    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:45.556000    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:45.556013    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:45.573424    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:45.573435    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:45.590281    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:45.590292    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:45.625722    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:45.625730    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:45.638438    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:45.638450    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:45.667528    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:45.667539    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:45.702642    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:45.702655    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:45.724753    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:45.724768    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:45.756568    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:45.756592    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:48.284347    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:53.285068    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:53.285298    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:53.304287    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:53.304390    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:53.317691    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:53.317768    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:53.329476    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:04:53.329552    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:53.340300    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:53.340378    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:53.350302    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:53.350378    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:53.361029    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:53.361095    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:53.371190    8424 logs.go:282] 0 containers: []
	W1007 05:04:53.371202    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:53.371273    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:53.381288    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:53.381307    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:53.381313    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:53.385888    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:53.385894    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:53.400241    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:53.400252    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:53.412084    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:53.412096    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:53.423664    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:53.423674    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:53.435012    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:53.435023    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:53.473739    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:04:53.473750    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:04:53.485477    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:53.485488    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:53.497760    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:53.497770    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:53.510493    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:53.510504    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:53.530009    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:53.530019    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:53.547299    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:53.547309    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:53.580970    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:53.580980    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:53.596234    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:04:53.596244    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:04:53.607663    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:53.607675    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
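
Note that at 05:04:53 the coredns query starts returning 4 containers instead of 2: b3a4ad1dc3c0 and b5e78bd6a887 are new IDs, consistent with the CoreDNS pods having been recreated while the apiserver stayed unreachable. A hypothetical helper (not part of minikube) that would surface such a change between polling cycles:

    package main

    import "fmt"

    // newIDs returns the container IDs present in cur but not in prev.
    func newIDs(prev, cur []string) []string {
    	seen := make(map[string]bool, len(prev))
    	for _, id := range prev {
    		seen[id] = true
    	}
    	var added []string
    	for _, id := range cur {
    		if !seen[id] {
    			added = append(added, id)
    		}
    	}
    	return added
    }

    func main() {
    	// ID sets taken from the coredns lines before and after 05:04:53.
    	prev := []string{"447efd5b173e", "dcd1d90b7fbb"}
    	cur := []string{"b3a4ad1dc3c0", "b5e78bd6a887", "447efd5b173e", "dcd1d90b7fbb"}
    	fmt.Println("new coredns containers:", newIDs(prev, cur))
    	// Output: new coredns containers: [b3a4ad1dc3c0 b5e78bd6a887]
    }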
	I1007 05:04:56.135125    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:01.136145    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:01.136364    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:01.154141    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:01.154241    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:01.167534    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:01.167621    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:01.178555    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:01.178633    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:01.190363    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:01.190441    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:01.200811    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:01.200899    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:01.211178    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:01.211254    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:01.221553    8424 logs.go:282] 0 containers: []
	W1007 05:05:01.221564    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:01.221629    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:01.232303    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:01.232323    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:01.232328    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:01.268370    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:01.268386    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:01.292045    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:01.292057    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:01.307127    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:01.307142    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:01.319155    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:01.319166    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:01.337097    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:01.337108    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:01.348976    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:01.348986    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:01.383398    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:01.383409    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:01.387915    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:01.387920    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:01.408154    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:01.408166    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:01.422759    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:01.422771    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:01.435227    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:01.435237    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:01.447664    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:01.447677    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:01.459148    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:01.459159    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:01.473014    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:01.473024    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:04.000241    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:09.002572    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:09.002789    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:09.016635    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:09.016727    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:09.027592    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:09.027670    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:09.038719    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:09.038802    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:09.049398    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:09.049485    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:09.060538    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:09.060612    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:09.070825    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:09.070978    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:09.080954    8424 logs.go:282] 0 containers: []
	W1007 05:05:09.080964    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:09.081021    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:09.091269    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:09.091286    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:09.091300    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:09.096062    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:09.096076    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:09.108751    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:09.108764    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:09.120883    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:09.120895    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:09.133086    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:09.133102    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:09.147247    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:09.147262    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:09.159052    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:09.159067    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:09.170197    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:09.170208    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:09.195480    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:09.195487    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:09.230748    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:09.230756    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:09.245078    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:09.245087    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:09.263506    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:09.263515    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:09.298978    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:09.298992    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:09.311124    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:09.311134    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:09.329850    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:09.329861    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:11.845911    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:16.848189    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:16.848314    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:16.860802    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:16.860890    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:16.871686    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:16.871765    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:16.883679    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:16.883771    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:16.894502    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:16.894579    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:16.905742    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:16.905816    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:16.916209    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:16.916295    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:16.927003    8424 logs.go:282] 0 containers: []
	W1007 05:05:16.927018    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:16.927083    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:16.937945    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:16.937963    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:16.937970    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:16.943098    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:16.943106    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:16.977585    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:16.977599    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:16.992842    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:16.992856    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:17.009344    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:17.009354    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:17.034582    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:17.034590    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:17.069130    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:17.069138    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:17.080433    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:17.080448    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:17.092398    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:17.092408    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:17.109309    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:17.109320    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:17.121295    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:17.121310    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:17.133165    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:17.133181    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:17.154944    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:17.154954    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:17.174995    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:17.175008    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:17.187495    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:17.187505    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:19.701826    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:24.702397    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:24.702637    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:24.732265    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:24.732358    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:24.744856    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:24.744930    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:24.757687    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:24.757770    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:24.769462    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:24.769552    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:24.780101    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:24.780175    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:24.790552    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:24.790633    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:24.800841    8424 logs.go:282] 0 containers: []
	W1007 05:05:24.800854    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:24.800918    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:24.810993    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:24.811010    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:24.811016    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:24.830615    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:24.830627    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:24.842244    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:24.842255    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:24.864070    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:24.864086    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:24.875810    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:24.875826    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:24.895036    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:24.895048    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:24.907407    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:24.907424    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:24.932442    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:24.932451    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:24.966664    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:24.966677    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:24.983433    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:24.983447    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:24.995098    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:24.995109    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:25.010215    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:25.010224    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:25.045125    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:25.045134    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:25.049271    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:25.049278    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:25.064600    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:25.064611    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:27.578760    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:32.581173    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:32.581353    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:32.593526    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:32.593618    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:32.604336    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:32.604419    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:32.614988    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:32.615065    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:32.625539    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:32.625622    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:32.637298    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:32.637381    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:32.647793    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:32.647873    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:32.659770    8424 logs.go:282] 0 containers: []
	W1007 05:05:32.659785    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:32.659856    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:32.670768    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:32.670787    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:32.670793    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:32.706746    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:32.706758    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:32.718685    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:32.718698    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:32.730745    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:32.730757    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:32.743637    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:32.743648    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:32.763663    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:32.763674    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:32.776142    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:32.776153    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:32.801749    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:32.801757    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:32.806714    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:32.806720    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:32.824822    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:32.824848    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:32.837288    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:32.837300    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:32.875035    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:32.875046    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:32.889239    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:32.889249    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:32.903084    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:32.903094    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:32.917883    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:32.917894    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:35.431816    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:40.432540    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:40.432615    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:40.444261    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:40.444336    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:40.455373    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:40.455450    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:40.467953    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:40.468035    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:40.478418    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:40.478500    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:40.493120    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:40.493203    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:40.506632    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:40.506711    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:40.517984    8424 logs.go:282] 0 containers: []
	W1007 05:05:40.517997    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:40.518059    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:40.529667    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:40.529685    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:40.529692    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:40.569860    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:40.569869    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:40.589434    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:40.589443    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:40.602866    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:40.602879    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:40.616485    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:40.616501    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:40.634404    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:40.634418    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:40.647239    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:40.647250    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:40.673332    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:40.673352    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:40.686664    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:40.686676    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:40.723201    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:40.723215    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:40.737813    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:40.737828    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:40.750250    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:40.750262    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:40.767357    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:40.767371    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:40.772495    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:40.772507    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:40.787228    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:40.787240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:43.301347    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:48.301662    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:48.301775    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:48.320203    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:48.320301    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:48.331308    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:48.331388    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:48.342213    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:48.342291    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:48.352810    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:48.352889    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:48.363312    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:48.363385    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:48.374372    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:48.374441    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:48.388435    8424 logs.go:282] 0 containers: []
	W1007 05:05:48.388450    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:48.388518    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:48.399451    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:48.399467    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:48.399473    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:48.422921    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:48.422930    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:48.438210    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:48.438222    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:48.449559    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:48.449575    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:48.475845    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:48.475855    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:48.493700    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:48.493711    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:48.526710    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:48.526722    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:48.531056    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:48.531064    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:48.548407    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:48.548419    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:48.564792    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:48.564807    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:48.576351    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:48.576362    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:48.589836    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:48.589848    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:48.602199    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:48.602210    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:48.614413    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:48.614429    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:48.649646    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:48.649657    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:51.166119    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:56.168414    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:56.168618    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:56.190997    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:56.191130    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:56.207034    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:56.207120    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:56.220450    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:56.220532    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:56.231054    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:56.231133    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:56.241316    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:56.241395    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:56.251562    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:56.251638    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:56.265428    8424 logs.go:282] 0 containers: []
	W1007 05:05:56.265438    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:56.265501    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:56.276325    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:56.276348    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:56.276354    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:56.281108    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:56.281117    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:56.293459    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:56.293470    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:56.326327    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:56.326335    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:56.341453    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:56.341466    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:56.353605    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:56.353616    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:56.379007    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:56.379014    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:56.391018    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:56.391029    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:56.404245    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:56.404257    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:56.420465    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:56.420478    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:56.433442    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:56.433453    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:56.475367    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:56.475382    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:56.489394    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:56.489403    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:56.503821    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:56.503836    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:56.520442    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:56.520458    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:59.040491    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:04.042803    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:04.043040    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:04.063090    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:04.063208    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:04.077316    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:04.077409    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:04.089972    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:04.090049    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:04.100786    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:04.100867    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:04.111415    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:04.111493    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:04.121991    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:04.122064    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:04.132216    8424 logs.go:282] 0 containers: []
	W1007 05:06:04.132228    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:04.132295    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:04.142741    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:04.142759    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:04.142765    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:04.177169    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:04.177183    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:04.192094    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:04.192105    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:04.204141    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:04.204153    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:04.216247    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:04.216259    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:04.220849    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:04.220859    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:04.234596    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:04.234608    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:04.246637    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:04.246653    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:04.265248    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:04.265259    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:04.277360    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:04.277372    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:04.301680    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:04.301689    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:04.336093    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:04.336118    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:04.348220    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:04.348229    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:04.360927    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:04.360942    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:04.372288    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:04.372299    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:06.888767    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:11.890934    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:11.891026    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:11.901709    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:11.901785    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:11.912600    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:11.912687    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:11.922791    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:11.922864    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:11.933438    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:11.933516    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:11.949719    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:11.949788    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:11.959958    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:11.960032    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:11.970726    8424 logs.go:282] 0 containers: []
	W1007 05:06:11.970736    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:11.970796    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:11.981371    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:11.981392    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:11.981397    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:11.986032    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:11.986041    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:11.997807    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:11.997819    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:12.015869    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:12.015881    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:12.027320    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:12.027335    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:12.059992    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:12.060003    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:12.071737    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:12.071749    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:12.090456    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:12.090473    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:12.102225    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:12.102238    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:12.120619    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:12.120634    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:12.139065    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:12.139075    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:12.163485    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:12.163495    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:12.197821    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:12.197832    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:12.212869    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:12.212881    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:12.225225    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:12.225239    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:14.741907    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:19.744270    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:19.744487    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:19.757999    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:19.758092    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:19.769077    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:19.769151    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:19.780310    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:19.780398    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:19.792325    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:19.792401    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:19.807415    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:19.807487    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:19.817653    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:19.817792    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:19.828650    8424 logs.go:282] 0 containers: []
	W1007 05:06:19.828659    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:19.828720    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:19.839376    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:19.839391    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:19.839396    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:19.843665    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:19.843672    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:19.858708    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:19.858716    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:19.870630    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:19.870639    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:19.882136    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:19.882149    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:19.918735    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:19.918746    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:19.933175    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:19.933189    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:19.945076    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:19.945086    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:19.960911    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:19.960919    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:19.972699    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:19.972708    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:20.006505    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:20.006512    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:20.017783    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:20.017791    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:20.029025    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:20.029035    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:20.041489    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:20.041500    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:20.058935    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:20.058950    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:22.584127    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:27.586453    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:27.586616    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:27.601999    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:27.602096    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:27.613157    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:27.613234    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:27.624376    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:27.624455    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:27.634585    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:27.634666    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:27.645238    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:27.645314    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:27.656132    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:27.656205    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:27.666550    8424 logs.go:282] 0 containers: []
	W1007 05:06:27.666562    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:27.666627    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:27.677178    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:27.677197    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:27.677202    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:27.690940    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:27.690949    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:27.702268    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:27.702279    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:27.714025    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:27.714037    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:27.729120    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:27.729129    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:27.746322    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:27.746338    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:27.757843    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:27.757852    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:27.782385    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:27.782395    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:27.817434    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:27.817441    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:27.833435    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:27.833447    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:27.846123    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:27.846136    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:27.858312    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:27.858327    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:27.862775    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:27.862781    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:27.898569    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:27.898582    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:27.913385    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:27.913399    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:30.425814    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:35.428038    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:35.432208    8424 out.go:201] 
	W1007 05:06:35.436015    8424 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1007 05:06:35.436022    8424 out.go:270] * 
	W1007 05:06:35.436543    8424 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:06:35.451974    8424 out.go:201] 

** /stderr **
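Note on the /stderr block above: it records minikube's API-server wait loop. Each cycle probes https://10.0.2.15:8443/healthz with roughly a 5-second per-request timeout (the paired api_server.go:253/269 lines), gathers logs from every component container during a pause of about 2.5 seconds, then retries, until the overall 6m0s node wait expires and start exits with GUEST_START (exit status 80). A minimal Go sketch of that poll-until-deadline pattern follows; it is illustrative only, not minikube's implementation, and the function name, URL, timeouts, and intervals are assumptions read off the timestamps above:

	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthy polls url until it returns 200 OK or the overall
	// deadline passes. All names and parameters here are illustrative
	// assumptions, not minikube's actual API.
	func waitForHealthy(url string, overall, probeTimeout, pause time.Duration) error {
		client := &http.Client{
			Timeout: probeTimeout, // each probe fails fast, like the ~5s "stopped:" lines above
			// The bootstrapping apiserver serves a cert this sketch does not
			// verify; a real checker would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					return nil
				}
			}
			time.Sleep(pause) // minikube gathers component logs during this gap
		}
		return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second, 2500*time.Millisecond); err != nil {
			fmt.Println("wait for healthy API server:", err)
		}
	}

With these numbers a full cycle takes roughly 7-8 seconds, so the loop runs through dozens of near-identical probe/gather cycles before the deadline fires, which is why the same "Gathering logs for ..." batches repeat throughout the block.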
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-802000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-07 05:06:35.54359 -0700 PDT m=+1385.759960626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-802000 -n running-upgrade-802000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-802000 -n running-upgrade-802000: exit status 2 (15.672087541s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
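The post-mortem helpers drive the minikube binary as a subprocess and tolerate certain non-zero exits, as the "(may be ok)" note above indicates. A hedged sketch of that pattern in Go follows; the command line is the one shown above, while the exit-code handling is illustrative rather than the helpers' exact logic.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}",
			"-p", "running-upgrade-802000", "-n", "running-upgrade-802000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("stdout/stderr: %s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// exit status 2 with "Running" on stdout is treated as "may be ok" above
			fmt.Printf("non-zero exit: %d\n", exitErr.ExitCode())
		}
	}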
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-802000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-956000          | force-systemd-flag-956000 | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-994000              | force-systemd-env-994000  | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-994000           | force-systemd-env-994000  | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT | 07 Oct 24 04:56 PDT |
	| start   | -p docker-flags-879000                | docker-flags-879000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-956000             | force-systemd-flag-956000 | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-956000          | force-systemd-flag-956000 | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT | 07 Oct 24 04:56 PDT |
	| start   | -p cert-expiration-557000             | cert-expiration-557000    | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-879000 ssh               | docker-flags-879000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-879000 ssh               | docker-flags-879000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-879000                | docker-flags-879000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT | 07 Oct 24 04:56 PDT |
	| start   | -p cert-options-287000                | cert-options-287000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-287000 ssh               | cert-options-287000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-287000 -- sudo        | cert-options-287000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-287000                | cert-options-287000       | jenkins | v1.34.0 | 07 Oct 24 04:56 PDT | 07 Oct 24 04:56 PDT |
	| start   | -p running-upgrade-802000             | minikube                  | jenkins | v1.26.0 | 07 Oct 24 04:56 PDT | 07 Oct 24 04:58 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-802000             | running-upgrade-802000    | jenkins | v1.34.0 | 07 Oct 24 04:58 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-557000             | cert-expiration-557000    | jenkins | v1.34.0 | 07 Oct 24 04:59 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-557000             | cert-expiration-557000    | jenkins | v1.34.0 | 07 Oct 24 04:59 PDT | 07 Oct 24 04:59 PDT |
	| start   | -p kubernetes-upgrade-530000          | kubernetes-upgrade-530000 | jenkins | v1.34.0 | 07 Oct 24 04:59 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-530000          | kubernetes-upgrade-530000 | jenkins | v1.34.0 | 07 Oct 24 05:00 PDT | 07 Oct 24 05:00 PDT |
	| start   | -p kubernetes-upgrade-530000          | kubernetes-upgrade-530000 | jenkins | v1.34.0 | 07 Oct 24 05:00 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-530000          | kubernetes-upgrade-530000 | jenkins | v1.34.0 | 07 Oct 24 05:00 PDT | 07 Oct 24 05:00 PDT |
	| start   | -p stopped-upgrade-013000             | minikube                  | jenkins | v1.26.0 | 07 Oct 24 05:00 PDT | 07 Oct 24 05:00 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-013000 stop           | minikube                  | jenkins | v1.26.0 | 07 Oct 24 05:00 PDT | 07 Oct 24 05:01 PDT |
	| start   | -p stopped-upgrade-013000             | stopped-upgrade-013000    | jenkins | v1.34.0 | 07 Oct 24 05:01 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 05:01:01
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 05:01:01.379427    8853 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:01:01.379587    8853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:01:01.379591    8853 out.go:358] Setting ErrFile to fd 2...
	I1007 05:01:01.379593    8853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:01:01.379734    8853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:01:01.380984    8853 out.go:352] Setting JSON to false
	I1007 05:01:01.399807    8853 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5432,"bootTime":1728297029,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:01:01.399871    8853 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:01:01.404236    8853 out.go:177] * [stopped-upgrade-013000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:01:01.412247    8853 notify.go:220] Checking for updates...
	I1007 05:01:01.415254    8853 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:01:01.423195    8853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:00:59.724597    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:00:59.724877    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:00:59.744932    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:00:59.745041    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:00:59.759269    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:00:59.759349    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:00:59.771972    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:00:59.772052    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:00:59.782857    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:00:59.782940    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:00:59.793338    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:00:59.793408    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:00:59.803567    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:00:59.803630    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:00:59.813579    8424 logs.go:282] 0 containers: []
	W1007 05:00:59.813591    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:00:59.813661    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:00:59.825893    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:00:59.825913    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:00:59.825918    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:00:59.864512    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:00:59.864521    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:00:59.868684    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:00:59.868692    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:00:59.904368    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:00:59.904378    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:00:59.915868    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:00:59.915884    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:00:59.927361    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:00:59.927373    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:00:59.939296    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:00:59.939307    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:00:59.951309    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:00:59.951321    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:00:59.966386    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:00:59.966396    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:00:59.991738    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:00:59.991744    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:00.008693    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:00.008707    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:00.020512    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:00.020522    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:00.034505    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:00.034520    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:00.052372    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:00.052383    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:00.064669    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:00.064680    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:00.078289    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:00.078299    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:00.103461    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:00.103471    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:01.431185    8853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:01:01.438197    8853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:01:01.446205    8853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:01:01.454227    8853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:01:01.458596    8853 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:01:01.463283    8853 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 05:01:01.467220    8853 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:01:01.470206    8853 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:01:01.478217    8853 start.go:297] selected driver: qemu2
	I1007 05:01:01.478223    8853 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:01:01.478276    8853 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:01:01.481225    8853 cni.go:84] Creating CNI manager for ""
	I1007 05:01:01.481268    8853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:01:01.481294    8853 start.go:340] cluster config:
	{Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:01:01.481352    8853 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:01:01.493212    8853 out.go:177] * Starting "stopped-upgrade-013000" primary control-plane node in "stopped-upgrade-013000" cluster
	I1007 05:01:01.497289    8853 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:01:01.497310    8853 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1007 05:01:01.497316    8853 cache.go:56] Caching tarball of preloaded images
	I1007 05:01:01.497403    8853 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:01:01.497409    8853 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1007 05:01:01.497475    8853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/config.json ...
	I1007 05:01:01.497858    8853 start.go:360] acquireMachinesLock for stopped-upgrade-013000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:01:01.497912    8853 start.go:364] duration metric: took 47.417µs to acquireMachinesLock for "stopped-upgrade-013000"
	I1007 05:01:01.497922    8853 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:01:01.497928    8853 fix.go:54] fixHost starting: 
	I1007 05:01:01.498053    8853 fix.go:112] recreateIfNeeded on stopped-upgrade-013000: state=Stopped err=<nil>
	W1007 05:01:01.498065    8853 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:01:01.501250    8853 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-013000" ...
	I1007 05:01:01.509210    8853 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:01:01.509328    8853 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51449-:22,hostfwd=tcp::51450-:2376,hostname=stopped-upgrade-013000 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/disk.qcow2
	I1007 05:01:01.561946    8853 main.go:141] libmachine: STDOUT: 
	I1007 05:01:01.561965    8853 main.go:141] libmachine: STDERR: 
	I1007 05:01:01.561972    8853 main.go:141] libmachine: Waiting for VM to start (ssh -p 51449 docker@127.0.0.1)...
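	The "Waiting for VM to start (ssh -p 51449 docker@127.0.0.1)..." line above blocks until the guest's forwarded SSH port answers; port 51449 is the hostfwd rule from the qemu command line. An illustrative readiness-wait sketch follows; using a bare TCP dial instead of a full SSH handshake is a simplification of what libmachine actually does.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Poll the host-forwarded guest SSH port until QEMU's user-mode
		// networking starts accepting connections.
		for {
			conn, err := net.DialTimeout("tcp", "127.0.0.1:51449", 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("guest SSH port is up")
				return
			}
			time.Sleep(time.Second)
		}
	}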
	I1007 05:01:02.619734    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:07.622427    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:07.622606    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:07.634221    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:07.634297    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:07.648586    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:07.648675    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:07.659878    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:07.659953    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:07.670831    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:07.670910    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:07.685514    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:07.685607    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:07.696072    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:07.696147    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:07.706172    8424 logs.go:282] 0 containers: []
	W1007 05:01:07.706183    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:07.706248    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:07.717205    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:07.717225    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:07.717230    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:07.754909    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:07.754919    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:07.759638    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:07.759645    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:07.800858    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:07.800870    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:07.819049    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:07.819058    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:07.833753    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:07.833764    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:07.848457    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:07.848468    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:07.866400    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:07.866411    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:07.877686    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:07.877698    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:07.889047    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:07.889057    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:07.902621    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:07.902631    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:07.927838    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:07.927845    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:07.953381    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:07.953391    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:07.968218    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:07.968227    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:07.987135    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:07.987146    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:08.002107    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:08.002117    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:08.013776    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:08.013785    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:10.528102    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:15.530371    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:15.530535    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:15.543712    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:15.543801    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:15.556164    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:15.556247    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:15.568121    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:15.568195    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:15.579813    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:15.579889    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:15.593844    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:15.593930    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:15.609871    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:15.609954    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:15.621176    8424 logs.go:282] 0 containers: []
	W1007 05:01:15.621189    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:15.621256    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:15.632592    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:15.632611    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:15.632617    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:15.645009    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:15.645022    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:15.660797    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:15.660811    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:15.674914    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:15.674925    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:15.711222    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:15.711234    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:15.726009    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:15.726025    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:15.737561    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:15.737573    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:15.751229    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:15.751240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:15.766094    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:15.766104    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:15.778009    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:15.778018    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:15.795985    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:15.795995    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:15.832410    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:15.832417    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:15.856229    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:15.856240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:15.870375    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:15.870385    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:15.885143    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:15.885153    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:15.909699    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:15.909706    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:15.914301    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:15.914308    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:21.279752    8853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/config.json ...
	I1007 05:01:21.280528    8853 machine.go:93] provisionDockerMachine start ...
	I1007 05:01:21.280747    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.281192    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.281206    8853 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 05:01:21.368856    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 05:01:21.368886    8853 buildroot.go:166] provisioning hostname "stopped-upgrade-013000"
	I1007 05:01:21.369047    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.369296    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.369310    8853 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-013000 && echo "stopped-upgrade-013000" | sudo tee /etc/hostname
	I1007 05:01:18.428519    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:21.447394    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-013000
	
	I1007 05:01:21.447475    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.447637    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.447650    8853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-013000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-013000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-013000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 05:01:21.515328    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 05:01:21.515344    8853 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19763-6232/.minikube CaCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19763-6232/.minikube}
	I1007 05:01:21.515353    8853 buildroot.go:174] setting up certificates
	I1007 05:01:21.515359    8853 provision.go:84] configureAuth start
	I1007 05:01:21.515366    8853 provision.go:143] copyHostCerts
	I1007 05:01:21.515460    8853 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem, removing ...
	I1007 05:01:21.515468    8853 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem
	I1007 05:01:21.515610    8853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem (1082 bytes)
	I1007 05:01:21.515833    8853 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem, removing ...
	I1007 05:01:21.515838    8853 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem
	I1007 05:01:21.516021    8853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem (1123 bytes)
	I1007 05:01:21.516234    8853 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem, removing ...
	I1007 05:01:21.516239    8853 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem
	I1007 05:01:21.516310    8853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem (1679 bytes)
	I1007 05:01:21.516438    8853 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-013000 san=[127.0.0.1 localhost minikube stopped-upgrade-013000]
	I1007 05:01:21.555323    8853 provision.go:177] copyRemoteCerts
	I1007 05:01:21.555405    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 05:01:21.555415    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:01:21.589969    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 05:01:21.597803    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 05:01:21.604964    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 05:01:21.612202    8853 provision.go:87] duration metric: took 96.8305ms to configureAuth
	I1007 05:01:21.612210    8853 buildroot.go:189] setting minikube options for container-runtime
	I1007 05:01:21.612322    8853 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:01:21.612375    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.612478    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.612483    8853 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1007 05:01:21.672209    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1007 05:01:21.672218    8853 buildroot.go:70] root file system type: tmpfs
	I1007 05:01:21.672269    8853 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1007 05:01:21.672346    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.672453    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.672490    8853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1007 05:01:21.735964    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1007 05:01:21.736025    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.736130    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.736140    8853 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1007 05:01:22.107535    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
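	The `diff -u ... || { mv ...; systemctl ... }` one-liner above is an idempotent update: the rendered unit is written to docker.service.new and only swapped in (followed by a daemon-reload, enable, and restart) when it differs from the live file. Here diff could not stat the old unit, so the new one was installed and the symlink created. The same pattern, sketched in Go; the file path comes from the log and the unit rendering itself is elided.

	package main

	import (
		"bytes"
		"os"
	)

	// updateUnit swaps in a freshly rendered systemd unit only when it differs
	// from what is already installed, mirroring the shell one-liner above.
	func updateUnit(path string, rendered []byte) (changed bool, err error) {
		current, readErr := os.ReadFile(path)
		if readErr == nil && bytes.Equal(current, rendered) {
			return false, nil // identical: skip daemon-reload and restart
		}
		// "can't stat" (first install) and content drift both land here.
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(path+".new", path)
	}

	func main() {
		// rendered unit text elided; see the [Unit]/[Service]/[Install] body above
		_, _ = updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
	}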
	I1007 05:01:22.107547    8853 machine.go:96] duration metric: took 827.011459ms to provisionDockerMachine
	I1007 05:01:22.107558    8853 start.go:293] postStartSetup for "stopped-upgrade-013000" (driver="qemu2")
	I1007 05:01:22.107565    8853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 05:01:22.107648    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 05:01:22.107660    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:01:22.142180    8853 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 05:01:22.143640    8853 info.go:137] Remote host: Buildroot 2021.02.12
	I1007 05:01:22.143646    8853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19763-6232/.minikube/addons for local assets ...
	I1007 05:01:22.143732    8853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19763-6232/.minikube/files for local assets ...
	I1007 05:01:22.143876    8853 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem -> 67502.pem in /etc/ssl/certs
	I1007 05:01:22.144041    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 05:01:22.152214    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem --> /etc/ssl/certs/67502.pem (1708 bytes)
	I1007 05:01:22.159150    8853 start.go:296] duration metric: took 51.584334ms for postStartSetup
	I1007 05:01:22.159170    8853 fix.go:56] duration metric: took 20.66130425s for fixHost
	I1007 05:01:22.159240    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:22.159360    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:22.159365    8853 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 05:01:22.220954    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302481.849964171
	
	I1007 05:01:22.220965    8853 fix.go:216] guest clock: 1728302481.849964171
	I1007 05:01:22.220969    8853 fix.go:229] Guest: 2024-10-07 05:01:21.849964171 -0700 PDT Remote: 2024-10-07 05:01:22.159172 -0700 PDT m=+20.806799959 (delta=-309.207829ms)
	I1007 05:01:22.220980    8853 fix.go:200] guest clock delta is within tolerance: -309.207829ms
	I1007 05:01:22.220985    8853 start.go:83] releasing machines lock for "stopped-upgrade-013000", held for 20.723129875s
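	The guest-clock check above is plain arithmetic: the guest reports 1728302481.849964171 via `date +%s.%N`, the host reads 2024-10-07 05:01:22.159172 PDT (unix 1728302482.159172), and the delta is their difference, -309.207829ms, which falls inside tolerance so no clock resync is needed. The same computation in Go, with values taken from the log (the fixed-zone construction is the only assumption):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1728302481, 849964171) // guest `date +%s.%N`
		pdt := time.FixedZone("PDT", -7*3600)
		host := time.Date(2024, 10, 7, 5, 1, 22, 159172000, pdt) // host wall clock
		fmt.Println(guest.Sub(host))                             // -309.207829ms
	}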
	I1007 05:01:22.221070    8853 ssh_runner.go:195] Run: cat /version.json
	I1007 05:01:22.221081    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:01:22.221070    8853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 05:01:22.221106    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	W1007 05:01:22.221611    8853 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51449: connect: connection refused
	I1007 05:01:22.221631    8853 retry.go:31] will retry after 346.353974ms: dial tcp [::1]:51449: connect: connection refused
	W1007 05:01:22.251710    8853 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1007 05:01:22.251754    8853 ssh_runner.go:195] Run: systemctl --version
	I1007 05:01:22.253560    8853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 05:01:22.255223    8853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 05:01:22.255253    8853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1007 05:01:22.258480    8853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1007 05:01:22.263390    8853 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
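
The two find/sed passes above pin every bridge/podman CNI config to the 10.244.0.0/16 pod CIDR. A before/after sketch on a hypothetical 87-podman-bridge.conflist entry (the 10.88.x defaults are illustrative, not from this run):

    # before:  "subnet": "10.88.0.0/16",   "gateway": "10.88.0.1"
    # after:   "subnet": "10.244.0.0/16",  "gateway": "10.244.0.1"
    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      /etc/cni/net.d/87-podman-bridge.conflist
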
	I1007 05:01:22.263398    8853 start.go:495] detecting cgroup driver to use...
	I1007 05:01:22.263472    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:01:22.270564    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1007 05:01:22.274294    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 05:01:22.277381    8853 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 05:01:22.277411    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 05:01:22.280323    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:01:22.283131    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 05:01:22.286667    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:01:22.290081    8853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 05:01:22.293406    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 05:01:22.296217    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 05:01:22.299043    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 05:01:22.302405    8853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 05:01:22.305700    8853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 05:01:22.308383    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:22.383690    8853 ssh_runner.go:195] Run: sudo systemctl restart containerd
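
Taken together, the sed edits above leave /etc/containerd/config.toml with roughly the following CRI-relevant settings before the restart; this is a sketch of the intended end state, not a verbatim dump of the file:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
    [plugins."io.containerd.grpc.v1.cri".cni]
      conf_dir = "/etc/cni/net.d"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
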
	I1007 05:01:22.390213    8853 start.go:495] detecting cgroup driver to use...
	I1007 05:01:22.390299    8853 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1007 05:01:22.396226    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:01:22.403034    8853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 05:01:22.409569    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:01:22.414175    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:01:22.418695    8853 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 05:01:22.450794    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:01:22.455830    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:01:22.461166    8853 ssh_runner.go:195] Run: which cri-dockerd
	I1007 05:01:22.462397    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1007 05:01:22.465467    8853 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1007 05:01:22.470572    8853 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1007 05:01:22.529464    8853 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1007 05:01:22.611155    8853 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1007 05:01:22.611233    8853 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1007 05:01:22.616558    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:22.697081    8853 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:01:23.829929    8853 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.132828458s)
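
The 130-byte daemon.json scp'd above is not echoed in the log. A plausible sketch consistent with the "cgroupfs" driver it configures (every field here is an assumption except the cgroup driver):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
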
	I1007 05:01:23.830020    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1007 05:01:23.835216    8853 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1007 05:01:23.842070    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:01:23.847326    8853 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1007 05:01:23.928354    8853 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1007 05:01:24.004286    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:24.085510    8853 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1007 05:01:24.091135    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:01:24.095993    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:24.173375    8853 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1007 05:01:24.213017    8853 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1007 05:01:24.213139    8853 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1007 05:01:24.214973    8853 start.go:563] Will wait 60s for crictl version
	I1007 05:01:24.215035    8853 ssh_runner.go:195] Run: which crictl
	I1007 05:01:24.216306    8853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 05:01:24.230967    8853 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1007 05:01:24.231054    8853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:01:24.247528    8853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:01:24.270584    8853 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1007 05:01:24.270669    8853 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1007 05:01:24.271918    8853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
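
The one-liner above is an idempotent hosts update: strip any stale host.minikube.internal line, append the current mapping, and install via a temp file so a partial write never truncates /etc/hosts. Unrolled:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'10.0.2.2\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
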
	I1007 05:01:24.275457    8853 kubeadm.go:883] updating cluster {Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1007 05:01:24.275501    8853 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:01:24.275548    8853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:01:24.287362    8853 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:01:24.287370    8853 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:01:24.287429    8853 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:01:24.290763    8853 ssh_runner.go:195] Run: which lz4
	I1007 05:01:24.292721    8853 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 05:01:24.293897    8853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 05:01:24.293906    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1007 05:01:25.303378    8853 docker.go:649] duration metric: took 1.010704292s to copy over tarball
	I1007 05:01:25.303449    8853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
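
The preload path is a stat-then-copy pattern: probe for /preloaded.tar.lz4, transfer the cached tarball only when the probe fails, extract it into /var, then delete it (the rm appears at 05:01:26 below). Condensed, with the transfer written as plain scp purely for illustration:

    if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
      # minikube pushes this over its own SSH session; scp shown for clarity
      scp -P 51449 preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 \
          docker@localhost:/preloaded.tar.lz4
    fi
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
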
	I1007 05:01:23.431291    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:23.431419    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:23.445135    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:23.445214    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:23.456616    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:23.456697    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:23.467071    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:23.467147    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:23.477881    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:23.477953    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:23.488419    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:23.488495    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:23.499055    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:23.499134    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:23.509088    8424 logs.go:282] 0 containers: []
	W1007 05:01:23.509099    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:23.509168    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:23.520306    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:23.520326    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:23.520331    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:23.537515    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:23.537529    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:23.550676    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:23.550689    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:23.574895    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:23.574906    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:23.579107    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:23.579114    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:23.594667    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:23.594680    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:23.633229    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:23.633245    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:23.647138    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:23.647150    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:23.658970    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:23.658981    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:23.674602    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:23.674614    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:23.686203    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:23.686217    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:23.700688    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:23.700697    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:23.726379    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:23.726398    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:23.747182    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:23.747199    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:23.759975    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:23.759988    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:23.773083    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:23.773095    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:23.814450    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:23.814470    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:26.332118    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:26.485450    8853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.181980917s)
	I1007 05:01:26.485464    8853 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 05:01:26.501267    8853 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:01:26.504153    8853 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1007 05:01:26.508935    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:26.588949    8853 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:01:28.184651    8853 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.595690375s)
	I1007 05:01:28.184744    8853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:01:28.197731    8853 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:01:28.197743    8853 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:01:28.197748    8853 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 05:01:28.201664    8853 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:28.203866    8853 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.205654    8853 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:28.206456    8853 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.208216    8853 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.208330    8853 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.209584    8853 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.209923    8853 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.210949    8853 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.211032    8853 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.212206    8853 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.212324    8853 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 05:01:28.213471    8853 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:28.213511    8853 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.214597    8853 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 05:01:28.215063    8853 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:28.742234    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.757686    8853 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1007 05:01:28.757722    8853 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.757795    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.760179    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.768677    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.768900    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1007 05:01:28.772536    8853 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1007 05:01:28.772566    8853 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.772639    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.785781    8853 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1007 05:01:28.785842    8853 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.785960    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.796485    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1007 05:01:28.797414    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1007 05:01:28.858209    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.868866    8853 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1007 05:01:28.868891    8853 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.868964    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.881526    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1007 05:01:28.895303    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.905971    8853 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1007 05:01:28.905997    8853 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.906059    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.915792    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1007 05:01:28.915928    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:01:28.917423    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1007 05:01:28.917441    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1007 05:01:28.965758    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 05:01:28.994926    8853 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1007 05:01:28.994951    8853 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1007 05:01:28.995022    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1007 05:01:29.020500    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 05:01:29.020643    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 05:01:29.023727    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1007 05:01:29.023750    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W1007 05:01:29.033311    8853 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 05:01:29.033470    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:29.037185    8853 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 05:01:29.037204    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1007 05:01:29.046162    8853 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 05:01:29.046305    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:29.059251    8853 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1007 05:01:29.059277    8853 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:29.059340    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:29.101535    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1007 05:01:29.105124    8853 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1007 05:01:29.105148    8853 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:29.105223    8853 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:29.113557    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 05:01:29.113713    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:01:29.137805    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1007 05:01:29.137842    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1007 05:01:29.140157    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 05:01:29.140291    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:01:29.161642    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1007 05:01:29.161678    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1007 05:01:29.252020    8853 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:01:29.252033    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1007 05:01:29.310474    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1007 05:01:29.310523    8853 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:01:29.310530    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1007 05:01:29.624290    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1007 05:01:29.624312    8853 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:01:29.624319    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1007 05:01:29.769857    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 05:01:29.769901    8853 cache_images.go:92] duration metric: took 1.5721495s to LoadCachedImages
	W1007 05:01:29.769943    8853 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
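
Every image in the cache-load section follows the same per-image pipeline: docker image inspect to detect the wrong-architecture hash, docker rmi, scp of the cached arm64 tarball into /var/lib/minikube/images, then a pipe into docker load. For etcd, mirroring the exact commands logged above:

    docker image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.3-0
    docker rmi registry.k8s.io/etcd:3.5.3-0
    # tarball already scp'd to the guest by this point
    sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load
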
	I1007 05:01:29.769949    8853 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1007 05:01:29.770000    8853 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-013000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
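
The empty ExecStart= in the generated unit is deliberate: systemd drop-ins append to the base unit, and a non-oneshot service may only declare one ExecStart, so the inherited value must be cleared before the replacement is set. The pattern in isolation (flags abridged; the full command line is in the unit above):

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
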
	I1007 05:01:29.770088    8853 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1007 05:01:29.783704    8853 cni.go:84] Creating CNI manager for ""
	I1007 05:01:29.783719    8853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:01:29.783728    8853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 05:01:29.783738    8853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-013000 NodeName:stopped-upgrade-013000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 05:01:29.783807    8853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-013000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
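
Note how cgroupDriver: cgroupfs in the KubeletConfiguration above stays in lockstep with the Docker and containerd settings applied earlier; the docker info probe logged just before the config dump is how that is re-verified on the guest:

    docker info --format '{{.CgroupDriver}}'   # expected output: cgroupfs
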
	I1007 05:01:29.783886    8853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1007 05:01:29.787187    8853 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 05:01:29.787230    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 05:01:29.790256    8853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1007 05:01:29.795314    8853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 05:01:29.800385    8853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1007 05:01:29.805662    8853 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1007 05:01:29.806884    8853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 05:01:29.810794    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:29.890059    8853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:01:29.896813    8853 certs.go:68] Setting up /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000 for IP: 10.0.2.15
	I1007 05:01:29.896822    8853 certs.go:194] generating shared ca certs ...
	I1007 05:01:29.896835    8853 certs.go:226] acquiring lock for ca certs: {Name:mk64252dad53b4f3a87f635894b143f083e9f2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:29.897023    8853 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.key
	I1007 05:01:29.897096    8853 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.key
	I1007 05:01:29.897105    8853 certs.go:256] generating profile certs ...
	I1007 05:01:29.897193    8853 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.key
	I1007 05:01:29.897210    8853 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c
	I1007 05:01:29.897221    8853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1007 05:01:29.989073    8853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c ...
	I1007 05:01:29.989088    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c: {Name:mkf812314bd83bfbea46a9b7eb7076846ede5d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:29.989591    8853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c ...
	I1007 05:01:29.989601    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c: {Name:mkd259e9af840cb8b5cfd8c70623cba409e3615b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:29.989755    8853 certs.go:381] copying /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c -> /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt
	I1007 05:01:29.989887    8853 certs.go:385] copying /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c -> /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key
	I1007 05:01:29.990067    8853 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/proxy-client.key
	I1007 05:01:29.990217    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750.pem (1338 bytes)
	W1007 05:01:29.990252    8853 certs.go:480] ignoring /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750_empty.pem, impossibly tiny 0 bytes
	I1007 05:01:29.990258    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 05:01:29.990291    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem (1082 bytes)
	I1007 05:01:29.990323    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem (1123 bytes)
	I1007 05:01:29.990354    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem (1679 bytes)
	I1007 05:01:29.990416    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem (1708 bytes)
	I1007 05:01:29.990775    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 05:01:29.998091    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 05:01:30.004976    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 05:01:30.011868    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 05:01:30.018920    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 05:01:30.026353    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 05:01:30.033647    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 05:01:30.040802    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 05:01:30.047857    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem --> /usr/share/ca-certificates/67502.pem (1708 bytes)
	I1007 05:01:30.054783    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 05:01:30.062113    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750.pem --> /usr/share/ca-certificates/6750.pem (1338 bytes)
	I1007 05:01:30.069504    8853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 05:01:30.074736    8853 ssh_runner.go:195] Run: openssl version
	I1007 05:01:30.076723    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67502.pem && ln -fs /usr/share/ca-certificates/67502.pem /etc/ssl/certs/67502.pem"
	I1007 05:01:30.079613    8853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67502.pem
	I1007 05:01:30.081005    8853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:45 /usr/share/ca-certificates/67502.pem
	I1007 05:01:30.081034    8853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67502.pem
	I1007 05:01:30.082753    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67502.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 05:01:30.085731    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 05:01:30.088683    8853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:01:30.090222    8853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:01:30.090259    8853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:01:30.092226    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 05:01:30.095194    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6750.pem && ln -fs /usr/share/ca-certificates/6750.pem /etc/ssl/certs/6750.pem"
	I1007 05:01:30.098659    8853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6750.pem
	I1007 05:01:30.100216    8853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:45 /usr/share/ca-certificates/6750.pem
	I1007 05:01:30.100240    8853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6750.pem
	I1007 05:01:30.101989    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6750.pem /etc/ssl/certs/51391683.0"
	I1007 05:01:30.105383    8853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 05:01:30.106858    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 05:01:30.108960    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 05:01:30.110825    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 05:01:30.112829    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 05:01:30.114623    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 05:01:30.116525    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
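
Two openssl idioms recur above: the subject-hash symlink that makes each CA resolvable under /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0), and -checkend to assert a cert stays valid for the next 24h. Condensed:

    # link a CA into the OpenSSL hash layout, e.g. b5213941.0 for minikubeCA
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # exits 0 only if the cert is still valid 86400 seconds from now
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
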
	I1007 05:01:30.118418    8853 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:01:30.118495    8853 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:01:30.128741    8853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 05:01:30.131775    8853 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 05:01:30.131783    8853 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 05:01:30.131812    8853 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 05:01:30.134933    8853 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:01:30.135286    8853 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-013000" does not appear in /Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:01:30.135386    8853 kubeconfig.go:62] /Users/jenkins/minikube-integration/19763-6232/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-013000" cluster setting kubeconfig missing "stopped-upgrade-013000" context setting]
	I1007 05:01:30.135583    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/kubeconfig: {Name:mk4c5026c1645f877740c1904a5f1050530a5193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:30.136061    8853 kapi.go:59] client config for stopped-upgrade-013000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.key", CAFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104147ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:01:30.136415    8853 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 05:01:30.139208    8853 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-013000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
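
Drift detection is nothing more than diff -u between the live kubeadm.yaml and the freshly rendered .new file; any nonzero exit triggers the reconfigure path. Schematically (the cp and the kubeadm init phases appear verbatim below):

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
      # ...then rerun the kubeadm init phases against the updated file
    fi
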
	I1007 05:01:30.139214    8853 kubeadm.go:1160] stopping kube-system containers ...
	I1007 05:01:30.139265    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:01:30.150472    8853 docker.go:483] Stopping containers: [d5ac2d0f9779 fa15598b25e6 023cc649d91f eb90044e46b6 0e9d10ca462d b8fc485885e4 483f390e6c19 0d30b4d058f2]
	I1007 05:01:30.150544    8853 ssh_runner.go:195] Run: docker stop d5ac2d0f9779 fa15598b25e6 023cc649d91f eb90044e46b6 0e9d10ca462d b8fc485885e4 483f390e6c19 0d30b4d058f2
	I1007 05:01:30.161412    8853 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 05:01:30.166850    8853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:01:30.169896    8853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:01:30.169901    8853 kubeadm.go:157] found existing configuration files:
	
	I1007 05:01:30.169933    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf
	I1007 05:01:30.172396    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:01:30.172429    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:01:30.174991    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf
	I1007 05:01:30.178061    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:01:30.178098    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:01:30.180939    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf
	I1007 05:01:30.183387    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:01:30.183417    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:01:30.186495    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf
	I1007 05:01:30.189472    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:01:30.189515    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
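
The four grep/rm pairs above are one rule applied per file: keep a kubeconfig only if it already points at the expected control-plane endpoint. As a loop:

    ep='https://control-plane.minikube.internal:51484'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
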
	I1007 05:01:30.192269    8853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:01:30.195169    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.217373    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.802734    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.938622    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.964652    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
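Because this is a restart rather than a first boot, minikube runs individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`, reusing on-disk state where possible. A hedged Go sketch of that sequence, with the PATH prefix and config path taken from the log lines and execution done locally rather than through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The five phases run above, in order. Each reuses existing state
	// (certificates, etcd data) when it is already present on disk.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" ` +
			"kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}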
	I1007 05:01:30.988440    8853 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:01:30.988532    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:01:31.334459    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:31.334622    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:31.349075    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:31.349189    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:31.361568    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:31.361654    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:31.371936    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:31.372009    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:31.382574    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:31.382643    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:31.393248    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:31.393321    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:31.403504    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:31.403591    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:31.413391    8424 logs.go:282] 0 containers: []
	W1007 05:01:31.413404    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:31.413469    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:31.423696    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:31.423718    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:31.423722    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:31.490586    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:01:31.990641    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:01:31.994836    8853 api_server.go:72] duration metric: took 1.00640025s to wait for apiserver process to appear ...
	I1007 05:01:31.994849    8853 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:01:31.994858    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
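From here process 8853 stops waiting for the apiserver process and starts polling its /healthz endpoint; every probe in this report times out after roughly five seconds with the client error shown. A minimal Go sketch of such a poll; the URL and interval come from the log, while skipping TLS verification is an assumption made only to keep the sketch self-contained (minikube validates against the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gaps between probes above
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // the api_server.go:269 case above
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}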
	I1007 05:01:31.462393    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:31.462404    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:31.501473    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:31.501489    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:31.514267    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:31.514279    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:31.538405    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:31.538420    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:31.554798    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:31.554811    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:31.569195    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:31.569214    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:31.589160    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:31.589174    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:31.606770    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:31.606785    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:31.611383    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:31.611390    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:31.636613    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:31.636624    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:31.650559    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:31.650570    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:31.666492    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:31.666505    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:31.680594    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:31.680607    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:31.705116    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:31.705135    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:31.730938    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:31.730957    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:31.744539    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:31.744553    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
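Each failed healthz probe triggers the diagnostics pass above: one `docker ps -a` per control-plane component to enumerate the `k8s_*` containers, then a 400-line tail of every container found, plus the kubelet and Docker journals, dmesg, and `kubectl describe nodes`. A condensed Go sketch of the container part, run locally for illustration, with the component names taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// e.g. "No container was found matching kindnet" in the log above
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// The tail is capped at 400 lines, as in the gathering loop above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
		}
	}
}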
	I1007 05:01:34.260961    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:36.996952    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:36.997011    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:39.263160    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:39.263322    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:39.274874    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:39.274952    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:39.289325    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:39.289399    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:39.299634    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:39.299708    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:39.311011    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:39.311092    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:39.322131    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:39.322202    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:39.335876    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:39.335952    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:39.346228    8424 logs.go:282] 0 containers: []
	W1007 05:01:39.346241    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:39.346298    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:39.357445    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:39.357471    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:39.357477    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:39.371983    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:39.371995    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:39.383625    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:39.383637    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:39.401837    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:39.401847    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:39.413396    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:39.413413    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:39.436097    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:39.436106    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:39.472641    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:39.472653    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:39.477230    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:39.477236    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:39.490051    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:39.490063    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:39.501829    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:39.501840    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:39.513636    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:39.513650    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:39.550501    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:39.550512    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:39.564665    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:39.564679    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:39.588218    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:39.588233    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:39.602519    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:39.602528    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:39.617114    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:39.617127    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:39.631182    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:39.631196    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:41.997391    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:41.997462    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:42.146142    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:46.998008    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:46.998036    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:47.148478    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:47.148635    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:47.159943    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:47.160025    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:47.170528    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:47.170609    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:47.181689    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:47.181771    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:47.195610    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:47.195684    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:47.206565    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:47.206640    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:47.217526    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:47.217598    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:47.227780    8424 logs.go:282] 0 containers: []
	W1007 05:01:47.227797    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:47.227867    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:47.238656    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:47.238674    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:47.238678    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:47.250787    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:47.250803    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:47.262219    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:47.262233    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:47.273933    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:47.273948    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:47.312254    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:47.312267    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:47.324435    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:47.324446    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:47.343587    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:47.343599    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:47.358810    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:47.358821    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:47.383473    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:47.383483    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:47.395544    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:47.395559    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:47.419147    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:47.419164    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:47.460697    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:47.460714    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:47.479297    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:47.479311    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:47.494650    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:47.494661    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:47.519551    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:47.519566    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:47.529282    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:47.529296    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:47.553937    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:47.553951    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:50.068299    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:51.998592    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:51.998636    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:55.069774    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:55.070090    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:01:55.092213    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:01:55.092326    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:01:55.113870    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:01:55.113953    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:01:55.125401    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:01:55.125480    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:01:55.136636    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:01:55.136714    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:01:55.147390    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:01:55.147468    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:01:55.158157    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:01:55.158236    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:01:55.168597    8424 logs.go:282] 0 containers: []
	W1007 05:01:55.168609    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:01:55.168674    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:01:55.179337    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:01:55.179356    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:01:55.179362    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:01:55.195329    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:01:55.195343    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:01:55.209282    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:01:55.209292    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:01:55.220930    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:01:55.220941    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:01:55.232369    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:01:55.232379    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:01:55.249978    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:01:55.249987    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:01:55.263949    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:01:55.263964    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:01:55.288003    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:01:55.288014    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:01:55.305327    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:01:55.305336    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:01:55.316904    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:01:55.316914    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:01:55.328428    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:01:55.328439    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:01:55.352609    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:01:55.352621    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:01:55.364440    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:01:55.364451    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:01:55.402389    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:01:55.402398    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:01:55.406747    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:01:55.406754    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:01:55.442206    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:01:55.442216    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:01:55.456060    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:01:55.456071    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:01:56.999497    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:56.999540    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:57.970219    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:02.000535    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:02.000583    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:02.972508    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:02.972619    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:02.991980    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:02:02.992074    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:03.003091    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:02:03.003175    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:03.014205    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:02:03.014298    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:03.025875    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:02:03.025966    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:03.037317    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:02:03.037394    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:03.048651    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:02:03.048724    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:03.059730    8424 logs.go:282] 0 containers: []
	W1007 05:02:03.059742    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:03.059818    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:03.070782    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:02:03.070803    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:03.070811    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:03.094177    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:02:03.094186    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:02:03.113594    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:03.113610    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:03.118588    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:03.118594    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:03.153400    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:02:03.153410    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:02:03.167869    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:02:03.167885    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:02:03.183051    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:02:03.183068    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:02:03.195705    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:02:03.195718    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:03.207545    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:03.207560    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:03.244849    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:02:03.244857    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:02:03.259656    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:02:03.259668    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:02:03.271594    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:02:03.271605    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:02:03.288969    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:02:03.288984    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:02:03.305576    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:02:03.305588    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:02:03.329406    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:02:03.329420    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:02:03.341110    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:02:03.341125    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:02:03.353262    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:02:03.353273    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:02:05.869191    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:07.001819    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:07.001866    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:10.871617    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:10.871777    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:10.883416    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:02:10.883504    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:10.895558    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:02:10.895640    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:10.906245    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:02:10.906325    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:10.916594    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:02:10.916674    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:10.927588    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:02:10.927666    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:10.938214    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:02:10.938293    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:10.951022    8424 logs.go:282] 0 containers: []
	W1007 05:02:10.951035    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:10.951102    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:10.961453    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:02:10.961475    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:02:10.961481    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:10.973561    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:02:10.973574    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:02:10.997562    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:02:10.997573    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:02:11.011046    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:02:11.011056    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:02:11.026705    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:02:11.026715    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:02:11.038399    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:11.038411    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:11.061131    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:11.061140    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:11.065774    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:02:11.065783    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:02:11.082504    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:02:11.082518    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:02:11.094247    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:11.094257    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:11.132269    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:02:11.132281    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:02:11.146957    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:02:11.146971    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:02:11.161384    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:02:11.161395    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:02:11.173804    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:11.173818    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:11.207362    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:02:11.207378    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:02:11.221670    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:02:11.221680    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:02:11.233250    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:02:11.233261    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:02:12.003607    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:12.003648    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:13.752769    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:17.005695    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:17.005733    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:18.755371    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:18.755580    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:18.770001    8424 logs.go:282] 2 containers: [b6fb94e99596 c63f6d43a7c8]
	I1007 05:02:18.770097    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:18.781718    8424 logs.go:282] 2 containers: [aeeec082f968 1dff5d275bc2]
	I1007 05:02:18.781798    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:18.795002    8424 logs.go:282] 1 containers: [9996ef575fb8]
	I1007 05:02:18.795083    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:18.805457    8424 logs.go:282] 2 containers: [3a37f2be4709 a615feede37b]
	I1007 05:02:18.805529    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:18.816198    8424 logs.go:282] 1 containers: [9017213d1fd7]
	I1007 05:02:18.816296    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:18.827118    8424 logs.go:282] 2 containers: [a8fa21bbc1f5 5ced4d1372d9]
	I1007 05:02:18.827201    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:18.841013    8424 logs.go:282] 0 containers: []
	W1007 05:02:18.841023    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:18.841089    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:18.851605    8424 logs.go:282] 2 containers: [51d6d11ea45c 5da4ba4dad6a]
	I1007 05:02:18.851622    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:18.851627    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:18.856480    8424 logs.go:123] Gathering logs for etcd [aeeec082f968] ...
	I1007 05:02:18.856489    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeeec082f968"
	I1007 05:02:18.870452    8424 logs.go:123] Gathering logs for kube-controller-manager [5ced4d1372d9] ...
	I1007 05:02:18.870463    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ced4d1372d9"
	I1007 05:02:18.885142    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:02:18.885152    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:18.899111    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:18.899121    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:18.936622    8424 logs.go:123] Gathering logs for kube-apiserver [b6fb94e99596] ...
	I1007 05:02:18.936637    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb94e99596"
	I1007 05:02:18.951347    8424 logs.go:123] Gathering logs for coredns [9996ef575fb8] ...
	I1007 05:02:18.951356    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9996ef575fb8"
	I1007 05:02:18.962876    8424 logs.go:123] Gathering logs for storage-provisioner [51d6d11ea45c] ...
	I1007 05:02:18.962887    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d6d11ea45c"
	I1007 05:02:18.974399    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:18.974413    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:18.997387    8424 logs.go:123] Gathering logs for kube-apiserver [c63f6d43a7c8] ...
	I1007 05:02:18.997395    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c63f6d43a7c8"
	I1007 05:02:19.021440    8424 logs.go:123] Gathering logs for etcd [1dff5d275bc2] ...
	I1007 05:02:19.021452    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dff5d275bc2"
	I1007 05:02:19.036788    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:19.036801    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:19.076059    8424 logs.go:123] Gathering logs for kube-scheduler [3a37f2be4709] ...
	I1007 05:02:19.076069    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a37f2be4709"
	I1007 05:02:19.095719    8424 logs.go:123] Gathering logs for kube-scheduler [a615feede37b] ...
	I1007 05:02:19.095731    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a615feede37b"
	I1007 05:02:19.113243    8424 logs.go:123] Gathering logs for kube-proxy [9017213d1fd7] ...
	I1007 05:02:19.113254    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9017213d1fd7"
	I1007 05:02:19.125038    8424 logs.go:123] Gathering logs for kube-controller-manager [a8fa21bbc1f5] ...
	I1007 05:02:19.125048    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fa21bbc1f5"
	I1007 05:02:19.142726    8424 logs.go:123] Gathering logs for storage-provisioner [5da4ba4dad6a] ...
	I1007 05:02:19.142737    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5da4ba4dad6a"
	I1007 05:02:22.007984    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:22.008032    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:21.656762    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:26.659123    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:26.659238    8424 kubeadm.go:597] duration metric: took 4m4.38473425s to restartPrimaryControlPlane
	W1007 05:02:26.659324    8424 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:02:26.659361    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:02:27.690694    8424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031316625s)
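At this point the restart path has been retrying for just over four minutes (4m4.38s), so minikube abandons restartPrimaryControlPlane, resets the cluster, and re-initializes from scratch. A Go sketch of that fallback, run locally and with the preflight-error list abbreviated (the full list appears in the init command below):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// bash stands in for ssh_runner in this sketch, running locally.
func bash(cmd string) error { return exec.Command("/bin/bash", "-c", cmd).Run() }

func main() {
	const env = `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `
	start := time.Now()
	// Wipe kubeadm state; --force skips the confirmation prompt.
	if err := bash(env + "kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	fmt.Println("reset completed in", time.Since(start))
	// Re-init; the ignored preflight errors (abbreviated here) are the checks
	// that would fail on directories and manifests left by the old cluster.
	_ = bash(env + "kubeadm init --config /var/tmp/minikube/kubeadm.yaml " +
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem")
}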
	I1007 05:02:27.690768    8424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:02:27.695925    8424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:02:27.698727    8424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:02:27.701579    8424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:02:27.701585    8424 kubeadm.go:157] found existing configuration files:
	
	I1007 05:02:27.701614    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf
	I1007 05:02:27.704645    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:02:27.704680    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:02:27.707381    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf
	I1007 05:02:27.709854    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:02:27.709886    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:02:27.713349    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf
	I1007 05:02:27.716410    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:02:27.716443    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:02:27.719139    8424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf
	I1007 05:02:27.721974    8424 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:02:27.722001    8424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 05:02:27.725169    8424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:02:27.743582    8424 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:02:27.743644    8424 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:02:27.792586    8424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:02:27.792675    8424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:02:27.792725    8424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 05:02:27.846206    8424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:02:27.849554    8424 out.go:235]   - Generating certificates and keys ...
	I1007 05:02:27.849586    8424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:02:27.849614    8424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:02:27.849661    8424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:02:27.849697    8424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:02:27.849738    8424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:02:27.849793    8424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:02:27.849842    8424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:02:27.849877    8424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:02:27.849927    8424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:02:27.849970    8424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:02:27.849990    8424 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:02:27.850018    8424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:02:27.942586    8424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:02:28.217460    8424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:02:28.283996    8424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:02:28.557780    8424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:02:28.585423    8424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:02:28.585833    8424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:02:28.585854    8424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:02:28.673637    8424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:02:27.010324    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:27.010346    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:28.677818    8424 out.go:235]   - Booting up control plane ...
	I1007 05:02:28.677865    8424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:02:28.677906    8424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:02:28.677945    8424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:02:28.677988    8424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:02:28.679680    8424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:02:33.185755    8424 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505681 seconds
	I1007 05:02:33.185870    8424 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:02:33.191902    8424 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:02:33.701065    8424 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:02:33.701170    8424 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-802000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:02:34.205909    8424 kubeadm.go:310] [bootstrap-token] Using token: tdjbgm.u9rqa1rmq6a14rbm
	I1007 05:02:34.211588    8424 out.go:235]   - Configuring RBAC rules ...
	I1007 05:02:34.211646    8424 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:02:34.211686    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:02:34.216153    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:02:34.217251    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 05:02:34.218135    8424 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:02:34.218959    8424 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:02:34.222186    8424 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:02:34.381729    8424 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:02:34.609850    8424 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:02:34.610338    8424 kubeadm.go:310] 
	I1007 05:02:34.610368    8424 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:02:34.610377    8424 kubeadm.go:310] 
	I1007 05:02:34.610422    8424 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:02:34.610425    8424 kubeadm.go:310] 
	I1007 05:02:34.610441    8424 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:02:34.610476    8424 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:02:34.610505    8424 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:02:34.610509    8424 kubeadm.go:310] 
	I1007 05:02:34.610534    8424 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:02:34.610537    8424 kubeadm.go:310] 
	I1007 05:02:34.610563    8424 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:02:34.610570    8424 kubeadm.go:310] 
	I1007 05:02:34.610597    8424 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:02:34.610645    8424 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:02:34.610684    8424 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:02:34.610687    8424 kubeadm.go:310] 
	I1007 05:02:34.610741    8424 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:02:34.610792    8424 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:02:34.610795    8424 kubeadm.go:310] 
	I1007 05:02:34.610856    8424 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tdjbgm.u9rqa1rmq6a14rbm \
	I1007 05:02:34.610913    8424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 \
	I1007 05:02:34.610930    8424 kubeadm.go:310] 	--control-plane 
	I1007 05:02:34.610937    8424 kubeadm.go:310] 
	I1007 05:02:34.610988    8424 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:02:34.610991    8424 kubeadm.go:310] 
	I1007 05:02:34.611031    8424 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tdjbgm.u9rqa1rmq6a14rbm \
	I1007 05:02:34.611080    8424 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 
	I1007 05:02:34.611249    8424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 05:02:34.611258    8424 cni.go:84] Creating CNI manager for ""
	I1007 05:02:34.611267    8424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:02:34.614429    8424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:02:34.622500    8424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:02:34.625885    8424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 05:02:34.630928    8424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:02:34.631018    8424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:02:34.631019    8424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-802000 minikube.k8s.io/updated_at=2024_10_07T05_02_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=running-upgrade-802000 minikube.k8s.io/primary=true
	I1007 05:02:34.674475    8424 ops.go:34] apiserver oom_adj: -16
	I1007 05:02:34.674523    8424 kubeadm.go:1113] duration metric: took 43.552417ms to wait for elevateKubeSystemPrivileges
	I1007 05:02:34.674535    8424 kubeadm.go:394] duration metric: took 4m12.414154209s to StartCluster
	I1007 05:02:34.674545    8424 settings.go:142] acquiring lock: {Name:mk5872a0c73b3208924793fa59bf550628bdf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:02:34.674748    8424 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:02:34.675128    8424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/kubeconfig: {Name:mk4c5026c1645f877740c1904a5f1050530a5193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:02:34.675321    8424 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:02:34.675334    8424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:02:34.675367    8424 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-802000"
	I1007 05:02:34.675389    8424 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-802000"
	I1007 05:02:34.675405    8424 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-802000"
	I1007 05:02:34.675422    8424 config.go:182] Loaded profile config "running-upgrade-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:02:34.675414    8424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-802000"
	W1007 05:02:34.675423    8424 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:02:34.675484    8424 host.go:66] Checking if "running-upgrade-802000" exists ...
	I1007 05:02:34.679440    8424 out.go:177] * Verifying Kubernetes components...
	I1007 05:02:34.680169    8424 kapi.go:59] client config for running-upgrade-802000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.key", CAFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10235bae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
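
For readers decoding the one-line rest.Config dump above (kapi.go:59): a minimal, hypothetical client-go sketch that assembles the same kind of certificate-based config. The host and certificate paths are taken from the log; everything else is illustrative, not minikube's actual code.

// Hypothetical sketch, not minikube source: a cert-based client-go config
// of the shape dumped at kapi.go:59 above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443", // apiserver endpoint from the log
		TLSClientConfig: rest.TLSClientConfig{
			// Paths copied from the log line above.
			CertFile: "/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/running-upgrade-802000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("building clientset:", err)
		return
	}
	_ = clientset // e.g. clientset.CoreV1().Nodes().List(...)
	fmt.Println("client configured for", cfg.Host)
}
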
	I1007 05:02:34.682655    8424 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-802000"
	W1007 05:02:34.682660    8424 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:02:34.682669    8424 host.go:66] Checking if "running-upgrade-802000" exists ...
	I1007 05:02:34.683196    8424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:02:34.683201    8424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:02:34.683206    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	I1007 05:02:34.685416    8424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:02:32.011627    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:32.011852    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:32.028106    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:32.028195    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:32.041153    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:32.041238    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:32.052334    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:32.052410    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:32.062689    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:32.062765    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:32.072858    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:32.072930    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:32.083829    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:32.083910    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:32.094031    8853 logs.go:282] 0 containers: []
	W1007 05:02:32.094044    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:32.094118    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:32.108964    8853 logs.go:282] 0 containers: []
	W1007 05:02:32.108977    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:02:32.108993    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:32.108999    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:32.120383    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:32.120395    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:32.135582    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:32.135592    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:32.174827    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:32.174834    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:32.190299    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:32.190310    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:32.204375    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:32.204387    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:32.216392    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:32.216402    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:32.236844    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:32.236859    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:32.262057    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:32.262067    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:32.365280    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:32.365291    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:02:32.381068    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:32.381080    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:32.395668    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:32.395680    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:32.411384    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:32.411398    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:32.423630    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:32.423643    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:32.427916    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:32.427923    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:02:34.957509    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:34.689466    8424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:02:34.695434    8424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:02:34.695441    8424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:02:34.695448    8424 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/running-upgrade-802000/id_rsa Username:docker}
	I1007 05:02:34.789477    8424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:02:34.795411    8424 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:02:34.795474    8424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:02:34.799819    8424 api_server.go:72] duration metric: took 124.485917ms to wait for apiserver process to appear ...
	I1007 05:02:34.799829    8424 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:02:34.799835    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:34.817736    8424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:02:34.841710    8424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:02:35.147347    8424 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:02:35.147359    8424 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 05:02:39.959759    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
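
The repeated "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs above are a poll loop timing out against an unreachable apiserver; the paired timestamps sit roughly five seconds apart, consistent with a 5s HTTP client timeout. A minimal sketch of such a loop, assuming a plain net/http client (illustrative only, not minikube's api_server.go):

// Hypothetical sketch of the healthz polling these log lines reflect.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver /healthz endpoint.
// The "context deadline exceeded" errors above are this kind of client
// timeout firing while the VM's apiserver is unreachable.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: matches the ~5s gaps in the log
		Transport: &http.Transport{
			// The real client trusts the cluster CA; skipping verification
			// here just keeps the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Poll until healthy, mirroring the repeated Checking/stopped pairs.
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			continue
		}
		fmt.Println("healthy")
		return
	}
}
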
	I1007 05:02:39.959948    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:39.979701    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:39.979812    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:39.993818    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:39.993912    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:40.005796    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:40.005877    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:40.016381    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:40.016459    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:40.031628    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:40.031706    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:40.042930    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:40.043017    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:40.053531    8853 logs.go:282] 0 containers: []
	W1007 05:02:40.053541    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:40.053605    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:40.063731    8853 logs.go:282] 0 containers: []
	W1007 05:02:40.063744    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:02:40.063751    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:40.063756    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:40.079559    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:40.079570    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:40.094502    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:40.094512    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:40.105828    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:40.105842    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:40.117897    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:40.117907    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:40.131557    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:40.131566    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:40.170749    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:40.170756    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:40.207661    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:40.207670    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:40.223120    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:40.223128    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:02:40.247739    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:40.247748    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:40.264951    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:40.264962    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:40.278846    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:40.278855    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:40.302691    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:40.302699    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:40.306620    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:40.306626    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:40.318546    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:40.318556    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
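
Each diagnostic cycle above enumerates containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails each match with docker logs --tail 400 <id>. A self-contained sketch of that gathering loop (hypothetical, not the logs.go implementation):

// Hypothetical sketch of the per-component log-gathering cycles above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists docker containers whose name matches the given
// k8s component, mirroring the repeated
// "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" calls.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Tail each container, as in `docker logs --tail 400 <id>` above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- logs for %s ---\n%s", id, logs)
		}
	}
}
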
	I1007 05:02:39.801937    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:39.801998    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:42.835413    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:44.802344    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:44.802372    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:47.836684    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:47.836788    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:47.849286    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:47.849372    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:47.861519    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:47.861601    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:47.872955    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:47.873033    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:47.884236    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:47.884328    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:47.899116    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:47.899211    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:47.911252    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:47.911335    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:47.923067    8853 logs.go:282] 0 containers: []
	W1007 05:02:47.923080    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:47.923152    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:47.934883    8853 logs.go:282] 0 containers: []
	W1007 05:02:47.934898    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:02:47.934906    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:47.934912    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:02:47.961686    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:47.961708    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:47.974541    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:47.974553    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:47.987952    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:47.987970    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:48.007457    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:48.007470    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:48.012304    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:48.012316    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:48.026821    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:48.026835    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:48.041802    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:48.041817    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:48.054061    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:48.054074    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:48.093614    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:48.093623    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:48.132850    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:48.132869    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:48.148137    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:48.148148    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:48.160713    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:48.160725    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:02:48.176178    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:48.176192    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:48.190253    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:48.190264    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:50.717669    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:49.802753    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:49.802774    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:55.719853    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:55.720090    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:55.749407    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:55.749516    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:55.765117    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:55.765208    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:55.777549    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:55.777630    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:55.789582    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:55.789666    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:55.800097    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:55.800173    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:55.811053    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:55.811133    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:55.821285    8853 logs.go:282] 0 containers: []
	W1007 05:02:55.821296    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:55.821360    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:55.835050    8853 logs.go:282] 0 containers: []
	W1007 05:02:55.835061    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:02:55.835069    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:55.835075    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:55.848676    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:55.848685    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:55.860139    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:55.860151    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:55.875261    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:55.875275    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:55.890280    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:55.890290    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:55.907708    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:55.907719    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:55.948642    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:55.948649    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:55.962798    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:55.962808    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:55.988268    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:55.988276    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:02:56.015571    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:56.015583    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:56.050844    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:56.050856    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:56.069617    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:56.069626    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:56.081511    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:56.081521    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:56.099392    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:56.099408    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:56.104358    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:56.104367    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:02:54.803189    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:54.803214    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:58.621261    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:59.803758    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:59.803788    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:04.804505    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:04.804548    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:03:05.149740    8424 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1007 05:03:05.154736    8424 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:03:03.621688    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:03.622008    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:03.649397    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:03.649534    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:03.667025    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:03.667131    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:03.681274    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:03.681349    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:03.693506    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:03.693589    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:03.705373    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:03.705452    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:03.716350    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:03.716425    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:03.726792    8853 logs.go:282] 0 containers: []
	W1007 05:03:03.726806    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:03.726874    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:03.737025    8853 logs.go:282] 0 containers: []
	W1007 05:03:03.737037    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:03.737045    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:03.737051    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:03.751262    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:03.751278    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:03.770607    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:03.770616    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:03.782183    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:03.782192    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:03.818063    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:03.818074    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:03.832876    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:03.832887    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:03.845194    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:03.845204    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:03.860081    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:03.860093    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:03.874442    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:03.874453    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:03.879323    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:03.879333    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:03.905177    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:03.905190    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:03.931127    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:03.931143    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:03.942990    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:03.943000    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:03.983298    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:03.983308    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:04.009041    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:04.009049    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:05.163598    8424 addons.go:510] duration metric: took 30.488348958s for enable addons: enabled=[storage-provisioner]
	I1007 05:03:06.535588    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:09.805509    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:09.805581    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:11.537030    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:11.537580    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:11.573329    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:11.573487    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:11.595035    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:11.595153    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:11.615540    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:11.615621    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:11.627028    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:11.627112    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:11.638084    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:11.638167    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:11.652286    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:11.652369    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:11.663395    8853 logs.go:282] 0 containers: []
	W1007 05:03:11.663411    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:11.663481    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:11.676413    8853 logs.go:282] 0 containers: []
	W1007 05:03:11.676425    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:11.676432    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:11.676438    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:11.702079    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:11.702088    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:11.714321    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:11.714333    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:11.728866    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:11.728880    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:11.767790    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:11.767799    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:11.782073    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:11.782084    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:11.794170    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:11.794184    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:11.809964    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:11.809976    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:11.825064    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:11.825076    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:11.843090    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:11.843105    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:11.855272    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:11.855288    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:11.859448    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:11.859455    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:11.894864    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:11.894878    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:11.916500    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:11.916514    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:11.928195    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:11.928206    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:14.455155    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:14.806940    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:14.806980    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:19.457861    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:19.458136    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:19.482583    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:19.482694    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:19.507005    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:19.507093    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:19.518036    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:19.518109    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:19.528525    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:19.528606    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:19.538800    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:19.538881    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:19.549672    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:19.549746    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:19.560499    8853 logs.go:282] 0 containers: []
	W1007 05:03:19.560512    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:19.560582    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:19.571086    8853 logs.go:282] 0 containers: []
	W1007 05:03:19.571097    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:19.571107    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:19.571112    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:19.596528    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:19.596538    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:19.608170    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:19.608184    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:19.620319    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:19.620329    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:19.633095    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:19.633106    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:19.647425    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:19.647434    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:19.661150    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:19.661161    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:19.676121    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:19.676129    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:19.693455    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:19.693466    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:19.708262    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:19.708271    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:19.746875    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:19.746890    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:19.762756    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:19.762771    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:19.785880    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:19.785886    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:19.824614    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:19.824625    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:19.829440    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:19.829447    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:19.807626    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:19.807650    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:22.346019    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:24.809338    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:24.809403    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:27.347791    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:27.347970    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:27.364596    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:27.364682    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:27.376262    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:27.376342    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:27.386614    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:27.386685    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:27.398017    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:27.398095    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:27.408736    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:27.408808    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:27.419395    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:27.419478    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:27.429603    8853 logs.go:282] 0 containers: []
	W1007 05:03:27.429618    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:27.429685    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:27.439923    8853 logs.go:282] 0 containers: []
	W1007 05:03:27.439934    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:27.439943    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:27.439949    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:27.452491    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:27.452503    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:27.456951    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:27.456958    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:27.468701    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:27.468711    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:27.480370    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:27.480380    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:27.497717    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:27.497726    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:27.510850    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:27.510859    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:27.534176    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:27.534186    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:27.569936    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:27.569947    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:27.583825    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:27.583837    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:27.599335    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:27.599346    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:27.610856    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:27.610868    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:27.626095    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:27.626105    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:27.663541    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:27.663550    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:27.677270    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:27.677284    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:30.204691    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:29.811561    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:29.811592    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:35.206950    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:35.207065    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:35.217975    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:35.218043    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:35.229808    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:35.229888    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:35.240958    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:35.241037    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:35.251482    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:35.251554    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:35.262384    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:35.262460    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:35.272873    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:35.272957    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:35.282742    8853 logs.go:282] 0 containers: []
	W1007 05:03:35.282755    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:35.282816    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:35.293229    8853 logs.go:282] 0 containers: []
	W1007 05:03:35.293241    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:35.293251    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:35.293257    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:35.307168    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:35.307183    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:35.319605    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:35.319616    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:35.344506    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:35.344514    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:35.348826    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:35.348834    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:35.383776    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:35.383785    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:35.395782    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:35.395791    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:35.413863    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:35.413875    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:35.428084    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:35.428096    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:35.442195    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:35.442205    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:35.456822    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:35.456831    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:35.468445    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:35.468457    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:35.483213    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:35.483223    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:35.520521    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:35.520530    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:35.544956    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:35.544968    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
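	[editor's note] The repeated logs.go:282 / logs.go:284 lines above show how each diagnostic pass discovers the component containers: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per component, reporting either "N containers: [...]" or a "No container was found" warning. A minimal standalone sketch of that pattern, assuming a local `docker` CLI instead of minikube's ssh_runner (function names here are illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the repeated "docker ps -a --filter=name=k8s_<name>"
// pattern in the log: it returns the IDs of all containers (running or not)
// whose kubeadm-style k8s_ name prefix matches the given component.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component list the log walks through on every pass.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the "N containers: [...]" lines and the
		// `No container was found matching "..."` warnings above.
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```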
	I1007 05:03:34.813808    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:34.813976    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:34.825874    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:34.825943    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:34.836478    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:34.836557    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:34.847514    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:34.847588    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:34.858033    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:34.858101    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:34.868885    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:34.868959    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:34.880256    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:34.880329    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:34.893664    8424 logs.go:282] 0 containers: []
	W1007 05:03:34.893677    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:34.893740    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:34.904126    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:34.904148    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:34.904154    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:34.917826    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:34.917836    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:34.923008    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:34.923019    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:34.958066    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:34.958077    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:34.969867    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:34.969878    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:34.981294    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:34.981304    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:03:34.996230    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:34.996240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:35.008106    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:35.008116    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:35.032778    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:35.032786    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:35.065128    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:35.065135    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:35.079637    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:35.079647    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:35.093799    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:35.093810    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:35.106475    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:35.106486    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:03:38.058980    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:37.625972    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:43.060676    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
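	[editor's note] Each api_server.go:253 / api_server.go:269 pair above is one poll of the apiserver health endpoint; the client gives up with `context deadline exceeded (Client.Timeout exceeded while awaiting headers)` and the log-gathering pass re-runs. A hedged sketch of that loop, assuming a ~5s client timeout (inferred from the gaps between "Checking" and "stopped" entries) and skipping minikube's real CA handling:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The endpoint polled throughout this log. InsecureSkipVerify stands in
	// for minikube's real certificate handling; the timeout is an assumption.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for {
		fmt.Println("Checking apiserver healthz at", url, "...")
		resp, err := client.Get(url)
		if err != nil {
			// The branch this log keeps hitting: a timeout surfaces as
			// "context deadline exceeded (Client.Timeout exceeded while
			// awaiting headers)".
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second) // back off before the next poll
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
	}
}
```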
	I1007 05:03:43.060827    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:43.073519    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:43.073603    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:43.084555    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:43.084652    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:43.095339    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:43.095423    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:43.107828    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:43.107909    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:43.123806    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:43.123880    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:43.133952    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:43.134014    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:43.144183    8853 logs.go:282] 0 containers: []
	W1007 05:03:43.144195    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:43.144259    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:43.154001    8853 logs.go:282] 0 containers: []
	W1007 05:03:43.154015    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:43.154022    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:43.154029    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:43.168915    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:43.168925    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:43.193982    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:43.193994    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:43.198610    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:43.198618    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:43.212889    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:43.212899    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:43.227473    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:43.227482    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:43.238855    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:43.238866    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:43.252627    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:43.252641    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:43.291526    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:43.291533    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:43.325438    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:43.325448    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:43.339687    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:43.339697    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:43.356602    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:43.356611    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:43.369471    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:43.369483    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:43.394631    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:43.394640    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:43.406242    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:43.406254    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
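	[editor's note] Every `Gathering logs for <component> [<id>] ...` pair above runs `docker logs --tail 400 <id>` wrapped in `/bin/bash -c`, the indirection minikube's ssh_runner uses to execute shell strings inside the VM. A minimal local equivalent (the 400-line tail and the container IDs are taken from this report; elsewhere they differ):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// gather tails the last 400 lines of one container's logs, the same
// command the log shows for each component container.
func gather(id string) error {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail 400 %s", id))
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// kube-proxy and coredns IDs as they appear in this report.
	for _, id := range []string{"559bb1b4f060", "6b9e066fe67c"} {
		fmt.Printf("Gathering logs for [%s] ...\n", id)
		if err := gather(id); err != nil {
			fmt.Fprintln(os.Stderr, "gather failed:", err)
		}
	}
}
```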
	I1007 05:03:45.920187    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:42.628620    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:42.628897    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:42.647253    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:42.647344    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:42.660808    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:42.660895    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:42.671893    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:42.671969    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:42.682289    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:42.682370    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:42.692907    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:42.692986    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:42.703679    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:42.703759    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:42.717246    8424 logs.go:282] 0 containers: []
	W1007 05:03:42.717258    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:42.717321    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:42.727715    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:42.727730    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:42.727737    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:03:42.742901    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:42.742916    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:42.754939    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:42.754955    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:42.789846    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:42.789853    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:42.794488    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:42.794494    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:42.830894    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:42.830910    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:42.845378    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:42.845390    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:42.857302    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:42.857313    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:42.868340    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:42.868354    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:42.893430    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:42.893442    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:42.905782    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:42.905799    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:42.919755    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:42.919769    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:42.934556    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:42.934566    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:03:45.454662    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:50.922518    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:50.922646    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:50.933665    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:50.933734    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:50.944652    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:50.944736    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:50.955347    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:50.955431    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:50.965754    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:50.965834    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:50.979371    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:50.979451    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:50.989951    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:50.990019    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:51.000462    8853 logs.go:282] 0 containers: []
	W1007 05:03:51.000475    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:51.000551    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:51.011237    8853 logs.go:282] 0 containers: []
	W1007 05:03:51.011249    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:51.011257    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:51.011262    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:51.024934    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:51.024944    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:51.041499    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:51.041522    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:51.053973    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:51.053985    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:51.084912    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:51.084927    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:51.096611    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:51.096620    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:51.108531    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:51.108542    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:51.125918    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:51.125928    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:51.139586    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:51.139597    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:51.143625    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:51.143636    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:51.158790    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:51.158801    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:51.172622    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:51.172634    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:51.184209    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:51.184219    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:51.207588    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:51.207601    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:51.247720    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:51.247729    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
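	[editor's note] Besides per-container logs, each pass pulls host-level sources, visible verbatim above: the kubelet and Docker/cri-docker journals, a severity-filtered dmesg, and `kubectl describe nodes` run with the binary and kubeconfig inside the VM. A sketch that replays those exact commands from the log (only the Go wrapper around them is added; it assumes it runs where sudo and the /var/lib/minikube paths exist):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Host-level sources, copied verbatim from the log lines above.
	cmds := []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo journalctl -u docker -u cri-docker -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		`sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	}
	for _, c := range cmds {
		fmt.Println("Gathering:", c)
		cmd := exec.Command("/bin/bash", "-c", c)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "failed:", err)
		}
	}
}
```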
	I1007 05:03:50.457052    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:50.457322    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:50.484189    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:50.484340    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:50.501199    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:50.501287    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:50.514759    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:50.514859    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:50.526676    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:50.526750    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:50.537187    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:50.537268    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:50.547915    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:50.547993    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:50.557579    8424 logs.go:282] 0 containers: []
	W1007 05:03:50.557590    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:50.557650    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:50.568079    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:50.568094    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:50.568099    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:50.586738    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:50.586752    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:50.598844    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:50.598854    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:50.610347    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:50.610358    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:03:50.628687    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:50.628700    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:50.633286    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:50.633292    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:50.647262    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:50.647272    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:03:50.662816    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:50.662832    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:50.682620    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:50.682631    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:50.694410    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:50.694425    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:50.720531    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:50.720541    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:50.731779    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:50.731790    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:50.767214    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:50.767222    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
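	[editor's note] The recurring "container status" step is a shell fallback chain: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` prefers `crictl ps -a` when the binary resolves (the backtick substitution keeps the line from failing outright when it does not) and otherwise falls back to `sudo docker ps -a`. A sketch of the same prefer-crictl-else-docker idea, written out explicitly rather than as one shell line:

```go
package main

import (
	"os"
	"os/exec"
)

func run(tool string) error {
	// Equivalent of one arm of the log's fallback chain.
	cmd := exec.Command("sudo", tool, "ps", "-a")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Prefer crictl if it is on PATH, as `which crictl` does in the log.
	tool := "crictl"
	if _, err := exec.LookPath(tool); err != nil {
		tool = "docker" // crictl not installed: fall back immediately
	}
	if err := run(tool); err != nil && tool == "crictl" {
		// crictl present but failed (e.g. no CRI endpoint configured):
		// the `|| sudo docker ps -a` half of the chain.
		run("docker")
	}
}
```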
	I1007 05:03:53.784167    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:53.343727    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:58.786490    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:58.786639    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:58.797457    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:58.797544    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:58.808252    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:58.808332    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:58.819048    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:58.819129    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:58.829501    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:58.829574    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:58.839871    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:58.839949    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:58.850624    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:58.850700    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:58.860914    8853 logs.go:282] 0 containers: []
	W1007 05:03:58.860925    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:58.860985    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:58.871190    8853 logs.go:282] 0 containers: []
	W1007 05:03:58.871205    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:58.871212    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:58.871217    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:58.911184    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:58.911195    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:58.925687    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:58.925698    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:58.951763    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:58.951774    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:58.963216    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:58.963232    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:58.977209    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:58.977223    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:59.019974    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:59.019989    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:59.024152    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:59.024158    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:59.038664    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:59.038674    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:59.050566    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:59.050577    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:59.062670    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:59.062685    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:59.076550    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:59.076564    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:59.093651    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:59.093667    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:59.118920    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:59.118931    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:59.132784    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:59.132798    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:58.346073    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:58.346428    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:58.376989    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:03:58.377125    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:58.399027    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:03:58.399111    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:58.417079    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:03:58.417158    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:58.427867    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:03:58.427934    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:58.438656    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:03:58.438721    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:58.449802    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:03:58.449880    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:58.461812    8424 logs.go:282] 0 containers: []
	W1007 05:03:58.461824    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:58.461890    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:58.472243    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:03:58.472260    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:03:58.472266    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:03:58.484184    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:03:58.484200    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:03:58.495495    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:03:58.495508    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:03:58.507563    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:03:58.507574    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:03:58.519063    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:58.519074    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:58.524037    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:58.524045    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:58.564486    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:03:58.564497    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:03:58.578906    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:03:58.578916    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:03:58.599049    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:58.599060    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:58.625301    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:03:58.625329    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:58.636445    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:58.636456    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:58.672047    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:03:58.672057    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:03:58.689879    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:03:58.689891    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:01.206944    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:01.646787    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:06.209310    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:06.209550    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:06.230596    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:06.230690    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:06.243358    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:06.243438    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:06.254385    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:06.254459    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:06.264687    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:06.264765    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:06.274951    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:06.275036    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:06.285068    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:06.285140    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:06.295127    8424 logs.go:282] 0 containers: []
	W1007 05:04:06.295139    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:06.295214    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:06.305674    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:06.305691    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:06.305696    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:06.323207    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:06.323217    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:06.347745    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:06.347755    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:06.383217    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:06.383226    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:06.398440    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:06.398449    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:06.424792    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:06.424805    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:06.647252    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:06.647435    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:06.660252    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:06.660331    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:06.672285    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:06.672368    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:06.682863    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:06.682945    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:06.694370    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:06.694450    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:06.705288    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:06.705357    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:06.715658    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:06.715736    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:06.726213    8853 logs.go:282] 0 containers: []
	W1007 05:04:06.726225    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:06.726298    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:06.740251    8853 logs.go:282] 0 containers: []
	W1007 05:04:06.740262    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:06.740271    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:06.740276    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:06.779588    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:06.779595    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:06.783735    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:06.783744    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:06.817684    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:06.817694    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:06.846356    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:06.846367    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:06.860879    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:06.860896    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:06.874645    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:06.874660    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:06.886898    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:06.886909    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:06.902198    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:06.902208    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:06.913423    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:06.913436    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:06.925892    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:06.925902    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:06.937185    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:06.937196    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:06.956027    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:06.956037    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:06.971114    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:06.971123    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:06.988815    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:06.988829    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:09.514143    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:06.437201    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:06.437209    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:06.455586    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:06.455597    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:06.469834    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:06.469850    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:06.482453    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:06.482464    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:06.516155    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:06.516163    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:06.520835    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:06.520845    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:06.535181    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:06.535190    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:09.048971    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:14.516391    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:14.516558    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:14.528766    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:14.528844    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:14.539089    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:14.539166    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:14.548979    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:14.549061    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:14.559497    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:14.559576    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:14.573030    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:14.573106    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:14.584028    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:14.584106    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:14.594660    8853 logs.go:282] 0 containers: []
	W1007 05:04:14.594672    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:14.594737    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:14.605101    8853 logs.go:282] 0 containers: []
	W1007 05:04:14.605115    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:14.605122    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:14.605128    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:14.625683    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:14.625693    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:14.643593    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:14.643604    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:14.656469    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:14.656482    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:14.684162    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:14.684181    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:14.698445    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:14.698456    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:14.710033    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:14.710046    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:14.723943    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:14.723958    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:14.749455    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:14.749462    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:14.786938    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:14.786945    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:14.821785    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:14.821796    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:14.837027    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:14.837040    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:14.841898    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:14.841906    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:14.856873    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:14.856884    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:14.869072    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:14.869083    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:14.051197    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:14.051409    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:14.065648    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:14.065743    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:14.077023    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:14.077100    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:14.087724    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:14.087801    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:14.099127    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:14.099206    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:14.110516    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:14.110598    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:14.121689    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:14.121762    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:14.131852    8424 logs.go:282] 0 containers: []
	W1007 05:04:14.131865    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:14.131929    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:14.142024    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:14.142040    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:14.142045    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:14.156296    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:14.156310    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:14.167653    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:14.167663    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:14.189390    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:14.189400    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:14.213146    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:14.213155    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:14.246815    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:14.246824    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:14.283026    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:14.283037    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:14.294886    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:14.294898    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:14.309959    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:14.309972    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:14.321234    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:14.321243    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:14.332853    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:14.332867    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:14.343901    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:14.343914    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:14.348509    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:14.348515    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:17.385378    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:16.864360    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:22.387661    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:22.387827    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:22.404039    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:22.404134    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:22.416835    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:22.416917    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:22.427564    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:22.427641    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:22.438337    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:22.438416    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:22.452429    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:22.452505    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:22.467733    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:22.467809    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:22.477593    8853 logs.go:282] 0 containers: []
	W1007 05:04:22.477606    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:22.477672    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:22.487518    8853 logs.go:282] 0 containers: []
	W1007 05:04:22.487534    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:22.487544    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:22.487550    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:22.527276    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:22.527285    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:22.544783    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:22.544794    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:22.556367    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:22.556378    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:22.572927    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:22.572941    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:22.590129    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:22.590141    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:22.615090    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:22.615098    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:22.652478    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:22.652489    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:22.669106    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:22.669120    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:22.683903    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:22.683912    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:22.701841    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:22.701857    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:22.715858    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:22.715874    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:22.720428    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:22.720435    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:22.746221    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:22.746233    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:22.762316    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:22.762329    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
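Note: the gathering pass itself shells out through /bin/bash -c, with every source capped (docker logs --tail 400, journalctl -n 400, dmesg piped through tail -n 400) so a wedged component cannot flood the report. Two details worth pulling out: the container-status step, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, uses crictl when it is on PATH and falls back to plain docker ps otherwise; and the describe-nodes step invokes the cluster's own pinned binary, /var/lib/minikube/binaries/v1.24.1/kubectl, rather than whatever kubectl the host has. A hypothetical wrapper running a few of the exact commands from the log (the loop is illustrative; the command strings are verbatim):

    package main

    import (
    	"fmt"
    	"os/exec"
    	)

    func main() {
    	cmds := []string{
    		"docker logs --tail 400 9e9c07519fe4", // one container, last 400 lines
    		"sudo journalctl -u kubelet -n 400",   // kubelet unit, same cap
    		// crictl if installed, otherwise fall back to docker:
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%q failed: %v\n", c, err)
    		}
    		fmt.Printf("%s", out)
    	}
    }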
	I1007 05:04:25.275909    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:21.866637    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:21.866832    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:21.881221    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:21.881308    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:21.891756    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:21.891841    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:21.902956    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:21.903031    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:21.913603    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:21.913674    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:21.923864    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:21.923950    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:21.935182    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:21.935253    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:21.953382    8424 logs.go:282] 0 containers: []
	W1007 05:04:21.953392    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:21.953451    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:21.963676    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:21.963689    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:21.963694    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:21.996630    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:21.996637    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:22.008192    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:22.008202    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:22.023062    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:22.023073    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:22.046421    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:22.046427    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:22.068686    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:22.068699    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:22.072996    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:22.073004    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:22.132706    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:22.132716    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:22.147242    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:22.147253    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:22.161461    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:22.161475    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:22.173156    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:22.173168    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:22.187863    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:22.187877    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:22.199606    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:22.199616    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:24.712150    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:30.278214    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:30.278379    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:30.289542    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:30.289636    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:30.300610    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:30.300689    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:30.311966    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:30.312043    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:30.325315    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:30.325397    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:30.339298    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:30.339378    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:30.350261    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:30.350341    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:30.360592    8853 logs.go:282] 0 containers: []
	W1007 05:04:30.360604    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:30.360679    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:30.371125    8853 logs.go:282] 0 containers: []
	W1007 05:04:30.371136    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:30.371165    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:30.371170    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:30.375381    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:30.375390    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:30.389923    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:30.389937    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:30.401900    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:30.401911    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:30.425680    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:30.425690    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:30.451453    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:30.451467    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:30.465713    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:30.465728    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:30.483856    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:30.483868    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:30.496390    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:30.496406    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:30.531135    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:30.531150    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:30.547231    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:30.547244    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:30.560984    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:30.560997    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:30.601074    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:30.601089    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:30.616979    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:30.616991    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:30.636138    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:30.636150    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:29.713974    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:29.714222    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:29.733624    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:29.733723    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:29.748893    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:29.748977    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:29.761650    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:29.761733    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:29.777434    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:29.777516    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:29.790438    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:29.790532    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:29.802229    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:29.802308    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:29.812775    8424 logs.go:282] 0 containers: []
	W1007 05:04:29.812786    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:29.812855    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:29.823313    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:29.823328    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:29.823336    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:29.858151    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:29.858163    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:29.872657    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:29.872672    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:29.886035    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:29.886046    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:29.898143    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:29.898154    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:29.917941    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:29.917950    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:29.933316    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:29.933324    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:29.957624    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:29.957632    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:29.990184    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:29.990192    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:30.002051    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:30.002066    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:30.018008    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:30.018022    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:30.035660    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:30.035673    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:30.047228    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:30.047243    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:33.158345    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:32.554109    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:38.160727    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:38.160909    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:38.176303    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:38.176395    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:38.188914    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:38.188993    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:38.202417    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:38.202484    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:38.214849    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:38.214929    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:38.224886    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:38.224967    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:38.235289    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:38.235374    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:38.245673    8853 logs.go:282] 0 containers: []
	W1007 05:04:38.245686    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:38.245751    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:38.261626    8853 logs.go:282] 0 containers: []
	W1007 05:04:38.261639    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:38.261648    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:38.261653    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:38.285978    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:38.285986    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:38.299002    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:38.299013    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:38.310408    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:38.310418    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:38.323875    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:38.323885    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:38.335881    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:38.335892    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:38.340428    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:38.340435    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:38.374560    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:38.374575    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:38.388779    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:38.388789    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:38.402755    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:38.402764    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:38.416855    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:38.416865    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:38.429466    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:38.429483    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:38.444400    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:38.444410    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:38.465249    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:38.465262    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:38.506476    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:38.506485    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:41.033685    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:37.556356    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:37.556611    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:37.581728    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:37.581831    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:37.595096    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:37.595177    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:37.605897    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:37.605969    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:37.616741    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:37.616818    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:37.627438    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:37.627518    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:37.641307    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:37.641422    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:37.652432    8424 logs.go:282] 0 containers: []
	W1007 05:04:37.652448    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:37.652519    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:37.663349    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:37.663366    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:37.663372    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:37.677921    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:37.677930    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:37.689866    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:37.689877    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:37.702191    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:37.702201    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:37.717047    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:37.717055    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:37.728962    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:37.728973    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:37.752913    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:37.752921    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:37.787285    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:37.787301    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:37.792139    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:37.792144    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:37.806638    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:37.806651    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:37.823792    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:37.823801    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:37.839882    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:37.839895    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:37.854114    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:37.854127    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:40.389427    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:46.034344    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:46.034526    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:46.051683    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:46.051779    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:46.066663    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:46.066752    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:46.079099    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:46.079181    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:46.089823    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:46.089904    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:46.100342    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:46.100419    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:46.110899    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:46.110980    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:46.127565    8853 logs.go:282] 0 containers: []
	W1007 05:04:46.127577    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:46.127649    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:46.138126    8853 logs.go:282] 0 containers: []
	W1007 05:04:46.138137    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:46.138146    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:46.138152    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:46.155520    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:46.155530    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:46.166831    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:46.166840    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:46.178184    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:46.178198    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:46.195354    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:46.195366    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:46.231975    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:46.231989    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:46.247050    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:46.247061    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:46.262259    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:46.262273    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:46.284288    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:46.284296    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:46.296235    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:46.296245    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:46.334853    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:46.334864    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:46.349771    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:46.349783    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:46.354152    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:46.354159    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:46.378776    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:46.378787    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:45.391969    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:45.392207    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:45.409028    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:45.409130    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:45.421319    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:45.421394    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:45.431949    8424 logs.go:282] 2 containers: [447efd5b173e dcd1d90b7fbb]
	I1007 05:04:45.432029    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:45.442637    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:45.442718    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:45.453259    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:45.453344    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:45.463281    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:45.463348    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:45.474412    8424 logs.go:282] 0 containers: []
	W1007 05:04:45.474425    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:45.474491    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:45.486336    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:45.486352    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:45.486358    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:45.491319    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:45.491326    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:45.527765    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:45.527777    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:45.542470    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:45.542484    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:45.556000    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:45.556013    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:45.573424    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:45.573435    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:45.590281    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:45.590292    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:45.625722    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:45.625730    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:45.638438    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:45.638450    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:45.667528    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:45.667539    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:45.702642    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:45.702655    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:45.724753    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:45.724768    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:45.756568    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:45.756592    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:46.394030    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:46.394042    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:48.909496    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:48.284347    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:53.910291    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:53.910408    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:53.923294    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:53.923378    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:53.935502    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:53.935575    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:53.953916    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:53.953995    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:53.964003    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:53.964075    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:53.976014    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:53.976089    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:53.986441    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:53.986520    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:53.996589    8853 logs.go:282] 0 containers: []
	W1007 05:04:53.996601    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:53.996664    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:54.007033    8853 logs.go:282] 0 containers: []
	W1007 05:04:54.007045    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:54.007052    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:54.007059    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:54.047435    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:54.047450    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:54.072536    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:54.072548    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:54.097819    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:54.097831    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:54.113349    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:54.113362    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:54.136901    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:54.136908    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:54.148515    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:54.148526    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:54.160513    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:54.160525    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:54.178445    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:54.178458    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:54.196478    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:54.196491    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:54.200821    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:54.200828    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:54.235819    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:54.235831    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:54.250557    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:54.250572    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:54.267959    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:54.267968    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:54.287042    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:54.287051    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:53.285068    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:53.285298    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:53.304287    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:04:53.304390    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:53.317691    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:04:53.317768    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:53.329476    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:04:53.329552    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:53.340300    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:04:53.340378    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:53.350302    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:04:53.350378    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:53.361029    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:04:53.361095    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:53.371190    8424 logs.go:282] 0 containers: []
	W1007 05:04:53.371202    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:53.371273    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:53.381288    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:04:53.381307    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:53.381313    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:53.385888    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:04:53.385894    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:04:53.400241    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:04:53.400252    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:04:53.412084    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:04:53.412096    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:04:53.423664    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:04:53.423674    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:04:53.435012    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:53.435023    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:53.473739    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:04:53.473750    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:04:53.485477    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:04:53.485488    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:04:53.497760    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:04:53.497770    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:53.510493    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:04:53.510504    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:04:53.530009    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:04:53.530019    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:04:53.547299    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:53.547309    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:53.580970    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:04:53.580980    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:04:53.596234    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:04:53.596244    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:04:53.607663    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:53.607675    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:56.135125    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:56.803981    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:01.136145    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:01.136364    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:01.154141    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:01.154241    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:01.167534    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:01.167621    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:01.178555    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:01.178633    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:01.190363    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:01.190441    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:01.200811    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:01.200899    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:01.211178    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:01.211254    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:01.221553    8424 logs.go:282] 0 containers: []
	W1007 05:05:01.221564    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:01.221629    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:01.232303    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:01.232323    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:01.232328    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:01.268370    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:01.268386    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:01.292045    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:01.292057    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:01.307127    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:01.307142    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:01.319155    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:01.319166    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:01.337097    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:01.337108    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:01.348976    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:01.348986    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:01.383398    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:01.383409    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:01.387915    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:01.387920    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:01.408154    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:01.408166    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:01.422759    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:01.422771    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:01.805375    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:01.805488    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:01.822095    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:01.822185    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:01.832843    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:01.832920    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:01.843264    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:01.843341    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:01.854001    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:01.854071    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:01.867913    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:01.867995    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:01.882771    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:01.882844    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:01.894906    8853 logs.go:282] 0 containers: []
	W1007 05:05:01.894918    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:01.894979    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:01.905490    8853 logs.go:282] 0 containers: []
	W1007 05:05:01.905500    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:05:01.905508    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:01.905513    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:01.930038    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:01.930049    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:01.944390    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:01.944405    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:01.959402    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:01.959411    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:01.976548    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:01.976559    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:01.990726    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:01.990737    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:01.995005    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:01.995011    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:02.009372    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:02.009381    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:02.024744    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:02.024754    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:02.064612    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:02.064622    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:02.076781    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:02.076790    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:02.112676    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:02.112688    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:02.126904    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:02.126917    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:02.142539    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:02.142549    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:02.166735    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:02.166742    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:04.680597    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:01.435227    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:01.435237    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:01.447664    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:01.447677    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:01.459148    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:01.459159    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:01.473014    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:01.473024    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:04.000241    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:09.682904    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:09.683109    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:09.700514    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:09.700605    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:09.713474    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:09.713565    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:09.724384    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:09.724461    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:09.735205    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:09.735288    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:09.745297    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:09.745383    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:09.755913    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:09.755986    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:09.766237    8853 logs.go:282] 0 containers: []
	W1007 05:05:09.766248    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:09.766309    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:09.776621    8853 logs.go:282] 0 containers: []
	W1007 05:05:09.776631    8853 logs.go:284] No container was found matching "storage-provisioner"
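
Containers are enumerated by the k8s_<component> name prefix that the kubelet (via cri-dockerd) assigns. Two IDs for a control-plane component mean an exited pre-restart container plus the current one; kindnet and storage-provisioner match nothing here, which produces the W lines above. The same enumeration by hand:

    # List all kube-apiserver containers (running or exited) by the
    # kubelet-assigned name prefix, printing only the container IDs.
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
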
	I1007 05:05:09.776638    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:09.776644    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:09.792564    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:09.792576    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:09.810459    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:09.810470    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:09.827101    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:09.827112    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:09.838687    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:09.838698    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:09.877653    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:09.877662    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:09.892079    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:09.892090    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:09.916446    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:09.916461    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:09.928235    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:09.928245    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:09.942829    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:09.942840    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:09.954907    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:09.954925    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:09.969056    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:09.969065    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:09.980972    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:09.980981    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:10.003927    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:10.003934    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:10.007986    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:10.007993    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:09.002572    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:09.002789    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:09.016635    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:09.016727    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:09.027592    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:09.027670    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:09.038719    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:09.038802    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:09.049398    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:09.049485    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:09.060538    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:09.060612    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:09.070825    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:09.070978    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:09.080954    8424 logs.go:282] 0 containers: []
	W1007 05:05:09.080964    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:09.081021    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:09.091269    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:09.091286    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:09.091300    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:09.096062    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:09.096076    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:09.108751    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:09.108764    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:09.120883    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:09.120895    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:09.133086    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:09.133102    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:09.147247    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:09.147262    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:09.159052    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:09.159067    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:09.170197    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:09.170208    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:09.195480    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:09.195487    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:09.230748    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:09.230756    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:09.245078    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:09.245087    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:09.263506    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:09.263515    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:09.298978    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:09.298992    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:09.311124    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:09.311134    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:09.329850    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:09.329861    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:12.545203    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:11.845911    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:17.547455    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:17.547563    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:17.558834    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:17.558918    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:17.570797    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:17.570873    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:17.582245    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:17.582334    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:17.594080    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:17.594163    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:17.604485    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:17.604569    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:17.615393    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:17.615465    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:17.625531    8853 logs.go:282] 0 containers: []
	W1007 05:05:17.625543    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:17.625612    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:17.636308    8853 logs.go:282] 0 containers: []
	W1007 05:05:17.636317    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:05:17.636324    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:17.636330    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:17.650265    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:17.650280    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:17.664391    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:17.664400    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:17.681308    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:17.681318    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:17.695089    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:17.695100    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:17.706937    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:17.706948    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:17.745623    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:17.745635    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:17.780559    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:17.780570    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:17.792068    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:17.792081    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:17.804506    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:17.804521    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:17.827283    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:17.827291    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:17.831322    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:17.831332    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:17.846305    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:17.846320    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:17.871994    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:17.872005    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:17.883648    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:17.883660    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:20.401093    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:16.848189    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:16.848314    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:16.860802    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:16.860890    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:16.871686    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:16.871765    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:16.883679    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:16.883771    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:16.894502    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:16.894579    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:16.905742    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:16.905816    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:16.916209    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:16.916295    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:16.927003    8424 logs.go:282] 0 containers: []
	W1007 05:05:16.927018    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:16.927083    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:16.937945    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:16.937963    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:16.937970    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:16.943098    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:16.943106    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:16.977585    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:16.977599    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:16.992842    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:16.992856    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:17.009344    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:17.009354    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:17.034582    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:17.034590    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:17.069130    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:17.069138    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:17.080433    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:17.080448    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:17.092398    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:17.092408    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:17.109309    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:17.109320    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:17.121295    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:17.121310    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:17.133165    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:17.133181    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:17.154944    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:17.154954    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:17.174995    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:17.175008    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:17.187495    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:17.187505    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:19.701826    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:25.403389    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:25.403592    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:25.439164    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:25.439254    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:25.458038    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:25.458124    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:25.468386    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:25.468462    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:25.486417    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:25.486500    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:25.496895    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:25.496978    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:25.507463    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:25.507540    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:25.517733    8853 logs.go:282] 0 containers: []
	W1007 05:05:25.517742    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:25.517801    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:25.528027    8853 logs.go:282] 0 containers: []
	W1007 05:05:25.528040    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:05:25.528049    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:25.528054    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:25.564723    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:25.564733    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:25.579465    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:25.579476    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:25.591184    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:25.591195    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:25.615711    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:25.615722    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:25.633830    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:25.633840    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:25.656414    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:25.656421    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:25.680618    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:25.680628    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:25.694512    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:25.694527    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:25.698769    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:25.698776    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:25.712722    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:25.712732    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:25.728706    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:25.728717    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:25.740345    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:25.740356    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:25.783316    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:25.783329    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:25.795236    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:25.795250    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:24.702397    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:24.702637    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:24.732265    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:24.732358    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:24.744856    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:24.744930    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:24.757687    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:24.757770    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:24.769462    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:24.769552    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:24.780101    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:24.780175    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:24.790552    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:24.790633    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:24.800841    8424 logs.go:282] 0 containers: []
	W1007 05:05:24.800854    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:24.800918    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:24.810993    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:24.811010    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:24.811016    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:24.830615    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:24.830627    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:24.842244    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:24.842255    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:24.864070    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:24.864086    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:24.875810    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:24.875826    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:24.895036    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:24.895048    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:24.907407    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:24.907424    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:24.932442    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:24.932451    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:24.966664    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:24.966677    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:24.983433    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:24.983447    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:24.995098    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:24.995109    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:25.010215    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:25.010224    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:25.045125    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:25.045134    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:25.049271    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:25.049278    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:25.064600    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:25.064611    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:28.308951    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:27.578760    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:33.311191    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:33.311269    8853 kubeadm.go:597] duration metric: took 4m3.18020125s to restartPrimaryControlPlane
	W1007 05:05:33.311331    8853 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:05:33.311356    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:05:34.250392    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:05:34.255564    8853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:05:34.258676    8853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:05:34.261578    8853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:05:34.261583    8853 kubeadm.go:157] found existing configuration files:
	
	I1007 05:05:34.261616    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf
	I1007 05:05:34.264722    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:05:34.264749    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:05:34.268159    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf
	I1007 05:05:34.271254    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:05:34.271289    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:05:34.273875    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf
	I1007 05:05:34.276955    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:05:34.276988    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:05:34.280282    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf
	I1007 05:05:34.282877    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:05:34.282906    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
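
The four grep-and-remove steps above implement one rule: a leftover kubeconfig under /etc/kubernetes is kept only if it already points at this cluster's control-plane endpoint. Since none of the files exist after the reset, every grep exits with status 2 and every rm -f is a no-op. Condensed into a sketch:

    # Keep each kubeconfig only if it targets this control-plane endpoint;
    # otherwise remove it so `kubeadm init` can regenerate it.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:51484" \
        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
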
	I1007 05:05:34.285620    8853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:05:34.302727    8853 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:05:34.302825    8853 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:05:34.351290    8853 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:05:34.351344    8853 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:05:34.351402    8853 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 05:05:34.401810    8853 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:05:34.405082    8853 out.go:235]   - Generating certificates and keys ...
	I1007 05:05:34.405115    8853 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:05:34.405141    8853 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:05:34.405184    8853 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:05:34.405288    8853 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:05:34.405322    8853 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:05:34.405352    8853 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:05:34.405420    8853 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:05:34.405461    8853 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:05:34.405498    8853 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:05:34.405533    8853 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:05:34.405590    8853 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:05:34.405654    8853 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:05:34.542118    8853 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:05:34.652636    8853 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:05:34.715263    8853 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:05:34.790620    8853 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:05:34.821219    8853 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:05:34.821568    8853 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:05:34.821629    8853 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:05:34.915201    8853 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:05:34.919587    8853 out.go:235]   - Booting up control plane ...
	I1007 05:05:34.919641    8853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:05:34.919688    8853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:05:34.919719    8853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:05:34.919757    8853 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:05:34.919843    8853 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:05:32.581173    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:32.581353    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:32.593526    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:32.593618    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:32.604336    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:32.604419    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:32.614988    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:32.615065    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:32.625539    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:32.625622    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:32.637298    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:32.637381    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:32.647793    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:32.647873    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:32.659770    8424 logs.go:282] 0 containers: []
	W1007 05:05:32.659785    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:32.659856    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:32.670768    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:32.670787    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:32.670793    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:32.706746    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:32.706758    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:32.718685    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:32.718698    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:32.730745    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:32.730757    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:32.743637    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:32.743648    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:32.763663    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:32.763674    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:32.776142    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:32.776153    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:32.801749    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:32.801757    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:32.806714    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:32.806720    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:32.824822    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:32.824848    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:32.837288    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:32.837300    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:32.875035    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:32.875046    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:32.889239    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:32.889249    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:32.903084    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:32.903094    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:32.917883    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:32.917894    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:35.431816    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:38.921722    8853 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001660 seconds
	I1007 05:05:38.921805    8853 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:05:38.925064    8853 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:05:39.443359    8853 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:05:39.443662    8853 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-013000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:05:39.948423    8853 kubeadm.go:310] [bootstrap-token] Using token: wgoviq.b6o62yjruw2arzai
	I1007 05:05:39.954757    8853 out.go:235]   - Configuring RBAC rules ...
	I1007 05:05:39.954814    8853 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:05:39.954850    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:05:39.962158    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:05:39.963148    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 05:05:39.964046    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:05:39.965017    8853 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:05:39.968278    8853 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:05:40.133868    8853 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:05:40.352944    8853 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:05:40.353360    8853 kubeadm.go:310] 
	I1007 05:05:40.353388    8853 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:05:40.353397    8853 kubeadm.go:310] 
	I1007 05:05:40.353441    8853 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:05:40.353444    8853 kubeadm.go:310] 
	I1007 05:05:40.353463    8853 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:05:40.353495    8853 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:05:40.353523    8853 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:05:40.353528    8853 kubeadm.go:310] 
	I1007 05:05:40.353560    8853 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:05:40.353564    8853 kubeadm.go:310] 
	I1007 05:05:40.353586    8853 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:05:40.353590    8853 kubeadm.go:310] 
	I1007 05:05:40.353614    8853 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:05:40.353659    8853 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:05:40.353699    8853 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:05:40.353705    8853 kubeadm.go:310] 
	I1007 05:05:40.353742    8853 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:05:40.353781    8853 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:05:40.353784    8853 kubeadm.go:310] 
	I1007 05:05:40.353831    8853 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wgoviq.b6o62yjruw2arzai \
	I1007 05:05:40.353885    8853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 \
	I1007 05:05:40.353896    8853 kubeadm.go:310] 	--control-plane 
	I1007 05:05:40.353899    8853 kubeadm.go:310] 
	I1007 05:05:40.353950    8853 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:05:40.353953    8853 kubeadm.go:310] 
	I1007 05:05:40.353995    8853 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wgoviq.b6o62yjruw2arzai \
	I1007 05:05:40.354044    8853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 
	I1007 05:05:40.354199    8853 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
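
The join commands embed a --discovery-token-ca-cert-hash, which joining nodes use to pin the cluster CA. It can be recomputed from the CA certificate with the standard kubeadm recipe (a sketch; the certificateDir is the /var/lib/minikube/certs folder named earlier in the init output):

    # Recompute the sha256 discovery hash of the cluster CA public key.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
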
	I1007 05:05:40.354291    8853 cni.go:84] Creating CNI manager for ""
	I1007 05:05:40.354301    8853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:05:40.358500    8853 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:05:40.368537    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:05:40.371777    8853 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
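
The bridge CNI config is copied from memory to /etc/cni/net.d/1-k8s.conflist; the log confirms the path and size (496 bytes) but not the payload. For illustration only, a minimal bridge conflist of the kind minikube generates might look like this (all field values are assumptions, not taken from the log):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
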
	I1007 05:05:40.376412    8853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:05:40.376468    8853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:05:40.376504    8853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-013000 minikube.k8s.io/updated_at=2024_10_07T05_05_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=stopped-upgrade-013000 minikube.k8s.io/primary=true
	I1007 05:05:40.413726    8853 kubeadm.go:1113] duration metric: took 37.304ms to wait for elevateKubeSystemPrivileges
	I1007 05:05:40.413747    8853 ops.go:34] apiserver oom_adj: -16
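
ops.go records the apiserver's OOM score adjustment: -16 tells the kernel to prefer killing almost any other process first under memory pressure. The check from the log, reproduced as a standalone command:

    # Read the OOM-killer adjustment of the running kube-apiserver;
    # negative values (here -16) shield it from the OOM killer.
    cat /proc/$(pgrep kube-apiserver)/oom_adj
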
	I1007 05:05:40.413753    8853 kubeadm.go:394] duration metric: took 4m10.296082s to StartCluster
	I1007 05:05:40.413763    8853 settings.go:142] acquiring lock: {Name:mk5872a0c73b3208924793fa59bf550628bdf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:05:40.413836    8853 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:05:40.414274    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/kubeconfig: {Name:mk4c5026c1645f877740c1904a5f1050530a5193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:05:40.414476    8853 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:05:40.414575    8853 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:05:40.414553    8853 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:05:40.414596    8853 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-013000"
	I1007 05:05:40.414602    8853 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-013000"
	I1007 05:05:40.414606    8853 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-013000"
	W1007 05:05:40.414606    8853 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:05:40.414617    8853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-013000"
	I1007 05:05:40.414629    8853 host.go:66] Checking if "stopped-upgrade-013000" exists ...
	I1007 05:05:40.420493    8853 out.go:177] * Verifying Kubernetes components...
	I1007 05:05:40.421171    8853 kapi.go:59] client config for stopped-upgrade-013000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.key", CAFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104147ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:05:40.426796    8853 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-013000"
	W1007 05:05:40.426801    8853 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:05:40.426808    8853 host.go:66] Checking if "stopped-upgrade-013000" exists ...
	I1007 05:05:40.427321    8853 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:05:40.427326    8853 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:05:40.427330    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:05:40.432490    8853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:05:40.436507    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:05:40.440548    8853 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:05:40.440567    8853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:05:40.440584    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:05:40.538887    8853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:05:40.544084    8853 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:05:40.544170    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:05:40.548122    8853 api_server.go:72] duration metric: took 133.634583ms to wait for apiserver process to appear ...
	I1007 05:05:40.548132    8853 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:05:40.548140    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:40.569596    8853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:05:40.588589    8853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:05:40.949096    8853 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:05:40.949108    8853 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 05:05:40.432540    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:40.432615    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:40.444261    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:40.444336    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:40.455373    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:40.455450    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:40.467953    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:40.468035    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:40.478418    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:40.478500    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:40.493120    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:40.493203    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:40.506632    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:40.506711    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:40.517984    8424 logs.go:282] 0 containers: []
	W1007 05:05:40.517997    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:40.518059    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:40.529667    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:40.529685    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:40.529692    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:40.569860    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:40.569869    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:40.589434    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:40.589443    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:40.602866    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:40.602879    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:40.616485    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:40.616501    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:40.634404    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:40.634418    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:40.647239    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:40.647250    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:40.673332    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:40.673352    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:40.686664    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:40.686676    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:40.723201    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:40.723215    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:40.737813    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:40.737828    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:40.750250    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:40.750262    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:40.767357    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:40.767371    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:40.772495    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:40.772507    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:40.787228    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:40.787240    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:45.548410    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:45.548471    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:43.301347    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:50.548805    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:50.548826    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:48.301662    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:48.301775    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:48.320203    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:48.320301    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:48.331308    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:48.331388    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:48.342213    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:48.342291    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:48.352810    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:48.352889    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:48.363312    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:48.363385    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:48.374372    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:48.374441    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:48.388435    8424 logs.go:282] 0 containers: []
	W1007 05:05:48.388450    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:48.388518    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:48.399451    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:48.399467    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:48.399473    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:48.422921    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:48.422930    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:48.438210    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:48.438222    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:48.449559    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:48.449575    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:48.475845    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:48.475855    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:48.493700    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:48.493711    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:48.526710    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:48.526722    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:48.531056    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:48.531064    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:48.548407    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:48.548419    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:48.564792    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:48.564807    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:48.576351    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:48.576362    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:48.589836    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:48.589848    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:05:48.602199    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:48.602210    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:48.614413    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:48.614429    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:48.649646    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:48.649657    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:51.166119    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:55.550147    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:55.550171    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:56.168414    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:56.168618    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:56.190997    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:05:56.191130    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:56.207034    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:05:56.207120    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:56.220450    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:05:56.220532    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:56.231054    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:05:56.231133    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:56.241316    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:05:56.241395    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:56.251562    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:05:56.251638    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:56.265428    8424 logs.go:282] 0 containers: []
	W1007 05:05:56.265438    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:56.265501    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:56.276325    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:05:56.276348    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:56.276354    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:56.281108    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:05:56.281117    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:05:56.293459    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:56.293470    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:56.326327    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:05:56.326335    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:05:56.341453    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:05:56.341466    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:05:56.353605    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:56.353616    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:56.379007    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:05:56.379014    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:56.391018    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:05:56.391029    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:05:56.404245    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:05:56.404257    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:05:56.420465    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:05:56.420478    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:00.550412    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:00.550458    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:56.433442    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:56.433453    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:56.475367    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:05:56.475382    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:05:56.489394    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:05:56.489403    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:05:56.503821    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:05:56.503836    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:05:56.520442    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:05:56.520458    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:05:59.040491    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:05.550830    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:05.550869    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:04.042803    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:04.043040    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:04.063090    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:04.063208    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:04.077316    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:04.077409    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:04.089972    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:04.090049    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:04.100786    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:04.100867    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:04.111415    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:04.111493    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:04.121991    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:04.122064    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:04.132216    8424 logs.go:282] 0 containers: []
	W1007 05:06:04.132228    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:04.132295    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:04.142741    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:04.142759    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:04.142765    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:04.177169    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:04.177183    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:04.192094    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:04.192105    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:04.204141    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:04.204153    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:04.216247    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:04.216259    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:04.220849    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:04.220859    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:04.234596    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:04.234608    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:04.246637    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:04.246653    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:04.265248    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:04.265259    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:04.277360    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:04.277372    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:04.301680    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:04.301689    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:04.336093    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:04.336118    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:04.348220    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:04.348229    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:04.360927    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:04.360942    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:04.372288    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:04.372299    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:10.551391    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:10.551415    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:06:10.951295    8853 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1007 05:06:10.954688    8853 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:06:10.968071    8853 addons.go:510] duration metric: took 30.553627916s for enable addons: enabled=[storage-provisioner]
	I1007 05:06:06.888767    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:15.551959    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:15.552016    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:11.890934    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:11.891026    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:11.901709    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:11.901785    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:11.912600    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:11.912687    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:11.922791    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:11.922864    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:11.933438    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:11.933516    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:11.949719    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:11.949788    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:11.959958    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:11.960032    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:11.970726    8424 logs.go:282] 0 containers: []
	W1007 05:06:11.970736    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:11.970796    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:11.981371    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:11.981392    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:11.981397    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:11.986032    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:11.986041    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:11.997807    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:11.997819    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:12.015869    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:12.015881    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:12.027320    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:12.027335    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:12.059992    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:12.060003    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:12.071737    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:12.071749    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:12.090456    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:12.090473    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:12.102225    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:12.102238    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:12.120619    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:12.120634    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:12.139065    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:12.139075    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:12.163485    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:12.163495    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:12.197821    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:12.197832    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:12.212869    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:12.212881    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:12.225225    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:12.225239    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:14.741907    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:20.552769    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:20.552810    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:19.744270    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:19.744487    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:19.757999    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:19.758092    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:19.769077    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:19.769151    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:19.780310    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:19.780398    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:19.792325    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:19.792401    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:19.807415    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:19.807487    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:19.817653    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:19.817792    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:19.828650    8424 logs.go:282] 0 containers: []
	W1007 05:06:19.828659    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:19.828720    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:19.839376    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:19.839391    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:19.839396    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:19.843665    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:19.843672    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:19.858708    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:19.858716    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:19.870630    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:19.870639    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:19.882136    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:19.882149    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:19.918735    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:19.918746    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:19.933175    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:19.933189    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:19.945076    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:19.945086    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:19.960911    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:19.960919    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:19.972699    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:19.972708    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:20.006505    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:20.006512    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:20.017783    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:20.017791    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:20.029025    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:20.029035    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:20.041489    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:20.041500    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:20.058935    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:20.058950    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:25.553777    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:25.553855    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:22.584127    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:30.555068    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:30.555110    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:27.586453    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:27.586616    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:27.601999    8424 logs.go:282] 1 containers: [a249e3838cce]
	I1007 05:06:27.602096    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:27.613157    8424 logs.go:282] 1 containers: [4ade76321e55]
	I1007 05:06:27.613234    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:27.624376    8424 logs.go:282] 4 containers: [b3a4ad1dc3c0 b5e78bd6a887 447efd5b173e dcd1d90b7fbb]
	I1007 05:06:27.624455    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:27.634585    8424 logs.go:282] 1 containers: [b6893261d173]
	I1007 05:06:27.634666    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:27.645238    8424 logs.go:282] 1 containers: [6d70296b5ae9]
	I1007 05:06:27.645314    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:27.656132    8424 logs.go:282] 1 containers: [f2290d081651]
	I1007 05:06:27.656205    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:27.666550    8424 logs.go:282] 0 containers: []
	W1007 05:06:27.666562    8424 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:27.666627    8424 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:27.677178    8424 logs.go:282] 1 containers: [4814fa4df319]
	I1007 05:06:27.677197    8424 logs.go:123] Gathering logs for etcd [4ade76321e55] ...
	I1007 05:06:27.677202    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade76321e55"
	I1007 05:06:27.690940    8424 logs.go:123] Gathering logs for coredns [b5e78bd6a887] ...
	I1007 05:06:27.690949    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e78bd6a887"
	I1007 05:06:27.702268    8424 logs.go:123] Gathering logs for container status ...
	I1007 05:06:27.702279    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:27.714025    8424 logs.go:123] Gathering logs for kube-scheduler [b6893261d173] ...
	I1007 05:06:27.714037    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6893261d173"
	I1007 05:06:27.729120    8424 logs.go:123] Gathering logs for kube-controller-manager [f2290d081651] ...
	I1007 05:06:27.729129    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2290d081651"
	I1007 05:06:27.746322    8424 logs.go:123] Gathering logs for storage-provisioner [4814fa4df319] ...
	I1007 05:06:27.746338    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4814fa4df319"
	I1007 05:06:27.757843    8424 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:27.757852    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:27.782385    8424 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:27.782395    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:27.817434    8424 logs.go:123] Gathering logs for coredns [b3a4ad1dc3c0] ...
	I1007 05:06:27.817441    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a4ad1dc3c0"
	I1007 05:06:27.833435    8424 logs.go:123] Gathering logs for coredns [447efd5b173e] ...
	I1007 05:06:27.833447    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 447efd5b173e"
	I1007 05:06:27.846123    8424 logs.go:123] Gathering logs for kube-proxy [6d70296b5ae9] ...
	I1007 05:06:27.846136    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d70296b5ae9"
	I1007 05:06:27.858312    8424 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:27.858327    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:27.862775    8424 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:27.862781    8424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:27.898569    8424 logs.go:123] Gathering logs for kube-apiserver [a249e3838cce] ...
	I1007 05:06:27.898582    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a249e3838cce"
	I1007 05:06:27.913385    8424 logs.go:123] Gathering logs for coredns [dcd1d90b7fbb] ...
	I1007 05:06:27.913399    8424 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd1d90b7fbb"
	I1007 05:06:30.425814    8424 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:35.428038    8424 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:35.432208    8424 out.go:201] 
	W1007 05:06:35.436015    8424 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1007 05:06:35.436022    8424 out.go:270] * 
	W1007 05:06:35.436543    8424 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:06:35.451974    8424 out.go:201] 
	I1007 05:06:35.556601    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:35.556619    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:40.558416    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:40.558552    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:40.572958    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:06:40.573040    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:40.586641    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:06:40.586715    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:40.598313    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:06:40.598387    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:40.610094    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:06:40.610165    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:40.626054    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:06:40.626130    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:40.640842    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:06:40.640921    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:40.652053    8853 logs.go:282] 0 containers: []
	W1007 05:06:40.652065    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:40.652125    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:40.663319    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:06:40.663337    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:06:40.663342    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:06:40.678891    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:06:40.678902    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:06:40.693738    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:06:40.693748    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:06:40.708161    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:06:40.708172    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:06:40.724232    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:06:40.724242    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:06:40.746639    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:40.746651    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:40.771517    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:06:40.771525    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:40.790567    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:40.790579    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:40.828034    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:40.828043    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:40.832560    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:40.832567    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:40.868649    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:06:40.868663    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:06:40.882031    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:06:40.882042    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:06:40.894403    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:06:40.894418    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:06:43.408745    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-10-07 11:57:46 UTC, ends at Mon 2024-10-07 12:06:51 UTC. --
	Oct 07 12:06:35 running-upgrade-802000 dockerd[3239]: time="2024-10-07T12:06:35.704728547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 07 12:06:35 running-upgrade-802000 dockerd[3239]: time="2024-10-07T12:06:35.704784128Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2dd45446be72ff710bc7f2f32b935ca7f982189e987770a158cfea896d865a51 pid=18820 runtime=io.containerd.runc.v2
	Oct 07 12:06:35 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:35Z" level=error msg="ContainerStats resp: {0x40004e6b80 linux}"
	Oct 07 12:06:35 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:35Z" level=error msg="ContainerStats resp: {0x40004e72c0 linux}"
	Oct 07 12:06:36 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:36Z" level=error msg="ContainerStats resp: {0x40007a29c0 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=error msg="ContainerStats resp: {0x40007372c0 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=error msg="ContainerStats resp: {0x4000737740 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=error msg="ContainerStats resp: {0x40007a3b80 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=error msg="ContainerStats resp: {0x40008d4000 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=error msg="ContainerStats resp: {0x400079a5c0 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=error msg="ContainerStats resp: {0x400079a900 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=error msg="ContainerStats resp: {0x40008d50c0 linux}"
	Oct 07 12:06:37 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 07 12:06:42 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 07 12:06:47 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:47Z" level=error msg="ContainerStats resp: {0x4000595680 linux}"
	Oct 07 12:06:47 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:47Z" level=error msg="ContainerStats resp: {0x4000736e40 linux}"
	Oct 07 12:06:47 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 07 12:06:48 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:48Z" level=error msg="ContainerStats resp: {0x40007a37c0 linux}"
	Oct 07 12:06:49 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:49Z" level=error msg="ContainerStats resp: {0x400035b280 linux}"
	Oct 07 12:06:49 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:49Z" level=error msg="ContainerStats resp: {0x400079a040 linux}"
	Oct 07 12:06:49 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:49Z" level=error msg="ContainerStats resp: {0x400035b2c0 linux}"
	Oct 07 12:06:49 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:49Z" level=error msg="ContainerStats resp: {0x400035bf00 linux}"
	Oct 07 12:06:49 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:49Z" level=error msg="ContainerStats resp: {0x400079b240 linux}"
	Oct 07 12:06:49 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:49Z" level=error msg="ContainerStats resp: {0x400079b900 linux}"
	Oct 07 12:06:49 running-upgrade-802000 cri-dockerd[3075]: time="2024-10-07T12:06:49Z" level=error msg="ContainerStats resp: {0x40004e70c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2dd45446be72f       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   7adc61423fd0f
	05ad251d43bc1       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   988895131ef5b
	b3a4ad1dc3c02       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   7adc61423fd0f
	b5e78bd6a8878       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   988895131ef5b
	4814fa4df3196       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   d06de74d58a78
	6d70296b5ae92       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   fad930463bbb4
	4ade76321e556       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   61744a885777d
	a249e3838cce2       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   7175612c96ac7
	f2290d0816512       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   8cd3320b6d671
	b6893261d173c       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   178b0d5aa0f32
	
	
	==> coredns [05ad251d43bc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 752522557787341317.6936857180326530477. HINFO: read udp 10.244.0.2:43322->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 752522557787341317.6936857180326530477. HINFO: read udp 10.244.0.2:55853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 752522557787341317.6936857180326530477. HINFO: read udp 10.244.0.2:47248->10.0.2.3:53: i/o timeout
	
	
	==> coredns [2dd45446be72] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2223607481652654394.3406750787861213040. HINFO: read udp 10.244.0.3:41431->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2223607481652654394.3406750787861213040. HINFO: read udp 10.244.0.3:56872->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2223607481652654394.3406750787861213040. HINFO: read udp 10.244.0.3:58536->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b3a4ad1dc3c0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:38092->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:35575->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:47567->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:52350->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:33410->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:40242->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:48032->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:57867->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:57197->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4251582693607961929.586594459898721803. HINFO: read udp 10.244.0.3:40077->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b5e78bd6a887] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:36952->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:56449->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:34966->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:55352->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:49282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:60035->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:45285->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:35392->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:46724->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2818865559786949071.8635811237997130992. HINFO: read udp 10.244.0.2:42157->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-802000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-802000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=running-upgrade-802000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T05_02_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:02:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-802000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:06:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:02:34 +0000   Mon, 07 Oct 2024 12:02:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:02:34 +0000   Mon, 07 Oct 2024 12:02:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:02:34 +0000   Mon, 07 Oct 2024 12:02:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:02:34 +0000   Mon, 07 Oct 2024 12:02:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-802000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a2d672bad804d19ab7bbd8291f2deb4
	  System UUID:                5a2d672bad804d19ab7bbd8291f2deb4
	  Boot ID:                    a4a53f2d-6a81-499c-8d25-9e20dc6c8cac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-54g2l                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-drj6w                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-802000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-802000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-802000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-fbvh9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-802000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-802000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-802000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-802000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-802000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-802000 event: Registered Node running-upgrade-802000 in Controller
	
	
	==> dmesg <==
	[  +1.699852] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.071760] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.080532] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.135468] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.071802] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.079812] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[Oct 7 11:58] systemd-fstab-generator[1290]: Ignoring "noauto" for root device
	[  +8.144094] systemd-fstab-generator[1957]: Ignoring "noauto" for root device
	[  +3.101379] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.146606] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +0.099934] systemd-fstab-generator[2282]: Ignoring "noauto" for root device
	[  +0.097873] systemd-fstab-generator[2295]: Ignoring "noauto" for root device
	[  +3.168481] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.210793] systemd-fstab-generator[3031]: Ignoring "noauto" for root device
	[  +0.086070] systemd-fstab-generator[3043]: Ignoring "noauto" for root device
	[  +0.084327] systemd-fstab-generator[3054]: Ignoring "noauto" for root device
	[  +0.087596] systemd-fstab-generator[3068]: Ignoring "noauto" for root device
	[  +2.317796] systemd-fstab-generator[3220]: Ignoring "noauto" for root device
	[  +2.713171] systemd-fstab-generator[3601]: Ignoring "noauto" for root device
	[  +1.155161] systemd-fstab-generator[3778]: Ignoring "noauto" for root device
	[ +19.542811] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 7 11:59] kauditd_printk_skb: 21 callbacks suppressed
	[Oct 7 12:02] systemd-fstab-generator[11923]: Ignoring "noauto" for root device
	[  +5.636265] systemd-fstab-generator[12523]: Ignoring "noauto" for root device
	[  +0.477271] systemd-fstab-generator[12658]: Ignoring "noauto" for root device
	
	
	==> etcd [4ade76321e55] <==
	{"level":"info","ts":"2024-10-07T12:02:29.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-07T12:02:29.975Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-07T12:02:29.976Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T12:02:29.976Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T12:02:29.976Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T12:02:29.976Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-07T12:02:29.976Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-07T12:02:30.409Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:02:30.410Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:02:30.410Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:02:30.410Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:02:30.410Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-802000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T12:02:30.410Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:02:30.410Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:02:30.412Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T12:02:30.412Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T12:02:30.412Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T12:02:30.417Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 12:06:51 up 9 min,  0 users,  load average: 0.16, 0.26, 0.17
	Linux running-upgrade-802000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a249e3838cce] <==
	I1007 12:02:31.604723       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1007 12:02:31.611444       1 controller.go:611] quota admission added evaluator for: namespaces
	I1007 12:02:31.659266       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1007 12:02:31.660589       1 cache.go:39] Caches are synced for autoregister controller
	I1007 12:02:31.660883       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1007 12:02:31.665457       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 12:02:31.669294       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1007 12:02:32.401073       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1007 12:02:32.562528       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1007 12:02:32.564458       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1007 12:02:32.564472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 12:02:32.708566       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 12:02:32.721483       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1007 12:02:32.816335       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1007 12:02:32.818293       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1007 12:02:32.818606       1 controller.go:611] quota admission added evaluator for: endpoints
	I1007 12:02:32.820020       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:02:33.695069       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1007 12:02:34.385255       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1007 12:02:34.388859       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1007 12:02:34.413953       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1007 12:02:34.474592       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 12:02:46.852474       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1007 12:02:47.450186       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1007 12:02:48.016469       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [f2290d081651] <==
	I1007 12:02:46.856898       1 shared_informer.go:262] Caches are synced for node
	I1007 12:02:46.856986       1 range_allocator.go:173] Starting range CIDR allocator
	I1007 12:02:46.857015       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1007 12:02:46.857036       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1007 12:02:46.857367       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fbvh9"
	I1007 12:02:46.861215       1 shared_informer.go:262] Caches are synced for service account
	I1007 12:02:46.862008       1 range_allocator.go:374] Set node running-upgrade-802000 PodCIDR to [10.244.0.0/24]
	I1007 12:02:46.863702       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1007 12:02:46.865224       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1007 12:02:46.867601       1 shared_informer.go:262] Caches are synced for stateful set
	I1007 12:02:46.877199       1 shared_informer.go:262] Caches are synced for crt configmap
	I1007 12:02:46.895551       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1007 12:02:46.895560       1 shared_informer.go:262] Caches are synced for deployment
	I1007 12:02:46.945295       1 shared_informer.go:262] Caches are synced for job
	I1007 12:02:46.968860       1 shared_informer.go:262] Caches are synced for cronjob
	I1007 12:02:46.996543       1 shared_informer.go:262] Caches are synced for attach detach
	I1007 12:02:47.017487       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1007 12:02:47.059879       1 shared_informer.go:262] Caches are synced for resource quota
	I1007 12:02:47.085040       1 shared_informer.go:262] Caches are synced for resource quota
	I1007 12:02:47.451258       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1007 12:02:47.495050       1 shared_informer.go:262] Caches are synced for garbage collector
	I1007 12:02:47.495134       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1007 12:02:47.497126       1 shared_informer.go:262] Caches are synced for garbage collector
	I1007 12:02:47.553007       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-drj6w"
	I1007 12:02:47.555740       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-54g2l"
	
	
	==> kube-proxy [6d70296b5ae9] <==
	I1007 12:02:47.985361       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1007 12:02:47.985396       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1007 12:02:47.985407       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1007 12:02:48.013309       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1007 12:02:48.013319       1 server_others.go:206] "Using iptables Proxier"
	I1007 12:02:48.013947       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1007 12:02:48.014147       1 server.go:661] "Version info" version="v1.24.1"
	I1007 12:02:48.014153       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:02:48.014446       1 config.go:317] "Starting service config controller"
	I1007 12:02:48.014458       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1007 12:02:48.014953       1 config.go:226] "Starting endpoint slice config controller"
	I1007 12:02:48.014971       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1007 12:02:48.015251       1 config.go:444] "Starting node config controller"
	I1007 12:02:48.015274       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1007 12:02:48.118465       1 shared_informer.go:262] Caches are synced for node config
	I1007 12:02:48.118503       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1007 12:02:48.118522       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [b6893261d173] <==
	W1007 12:02:31.609454       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 12:02:31.609593       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1007 12:02:31.609463       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:02:31.609597       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1007 12:02:31.609473       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:02:31.609600       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1007 12:02:31.609526       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 12:02:31.609604       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1007 12:02:31.609384       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:02:31.609609       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1007 12:02:31.609362       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:02:31.609613       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1007 12:02:31.609630       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:02:31.609634       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1007 12:02:32.454531       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:02:32.454548       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1007 12:02:32.499498       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:02:32.499508       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1007 12:02:32.570508       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 12:02:32.570529       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1007 12:02:32.619420       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:02:32.619516       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1007 12:02:32.643875       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:02:32.643942       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1007 12:02:33.207049       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-10-07 11:57:46 UTC, ends at Mon 2024-10-07 12:06:51 UTC. --
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.864199   12529 topology_manager.go:200] "Topology Admit Handler"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.930725   12529 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.930898   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7fcca6ba-95e6-421c-abda-55aee3fa6a5f-tmp\") pod \"storage-provisioner\" (UID: \"7fcca6ba-95e6-421c-abda-55aee3fa6a5f\") " pod="kube-system/storage-provisioner"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.930956   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhs8n\" (UniqueName: \"kubernetes.io/projected/7fcca6ba-95e6-421c-abda-55aee3fa6a5f-kube-api-access-zhs8n\") pod \"storage-provisioner\" (UID: \"7fcca6ba-95e6-421c-abda-55aee3fa6a5f\") " pod="kube-system/storage-provisioner"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.930989   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f553563-87fb-4501-ab4b-5576cd8cdea3-kube-proxy\") pod \"kube-proxy-fbvh9\" (UID: \"9f553563-87fb-4501-ab4b-5576cd8cdea3\") " pod="kube-system/kube-proxy-fbvh9"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.931065   12529 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.931159   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f553563-87fb-4501-ab4b-5576cd8cdea3-lib-modules\") pod \"kube-proxy-fbvh9\" (UID: \"9f553563-87fb-4501-ab4b-5576cd8cdea3\") " pod="kube-system/kube-proxy-fbvh9"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.931208   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbww\" (UniqueName: \"kubernetes.io/projected/9f553563-87fb-4501-ab4b-5576cd8cdea3-kube-api-access-vjbww\") pod \"kube-proxy-fbvh9\" (UID: \"9f553563-87fb-4501-ab4b-5576cd8cdea3\") " pod="kube-system/kube-proxy-fbvh9"
	Oct 07 12:02:46 running-upgrade-802000 kubelet[12529]: I1007 12:02:46.931249   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f553563-87fb-4501-ab4b-5576cd8cdea3-xtables-lock\") pod \"kube-proxy-fbvh9\" (UID: \"9f553563-87fb-4501-ab4b-5576cd8cdea3\") " pod="kube-system/kube-proxy-fbvh9"
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: E1007 12:02:47.036627   12529 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: E1007 12:02:47.036650   12529 projected.go:192] Error preparing data for projected volume kube-api-access-vjbww for pod kube-system/kube-proxy-fbvh9: configmap "kube-root-ca.crt" not found
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: E1007 12:02:47.036700   12529 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/9f553563-87fb-4501-ab4b-5576cd8cdea3-kube-api-access-vjbww podName:9f553563-87fb-4501-ab4b-5576cd8cdea3 nodeName:}" failed. No retries permitted until 2024-10-07 12:02:47.536687494 +0000 UTC m=+13.161656089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vjbww" (UniqueName: "kubernetes.io/projected/9f553563-87fb-4501-ab4b-5576cd8cdea3-kube-api-access-vjbww") pod "kube-proxy-fbvh9" (UID: "9f553563-87fb-4501-ab4b-5576cd8cdea3") : configmap "kube-root-ca.crt" not found
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: E1007 12:02:47.182324   12529 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: E1007 12:02:47.182351   12529 projected.go:192] Error preparing data for projected volume kube-api-access-zhs8n for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: E1007 12:02:47.182392   12529 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/7fcca6ba-95e6-421c-abda-55aee3fa6a5f-kube-api-access-zhs8n podName:7fcca6ba-95e6-421c-abda-55aee3fa6a5f nodeName:}" failed. No retries permitted until 2024-10-07 12:02:47.68237609 +0000 UTC m=+13.307344726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zhs8n" (UniqueName: "kubernetes.io/projected/7fcca6ba-95e6-421c-abda-55aee3fa6a5f-kube-api-access-zhs8n") pod "storage-provisioner" (UID: "7fcca6ba-95e6-421c-abda-55aee3fa6a5f") : configmap "kube-root-ca.crt" not found
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: I1007 12:02:47.556367   12529 topology_manager.go:200] "Topology Admit Handler"
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: I1007 12:02:47.564449   12529 topology_manager.go:200] "Topology Admit Handler"
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: I1007 12:02:47.739291   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcfd9\" (UniqueName: \"kubernetes.io/projected/acda1803-07b5-450e-9f46-e7e69c3270b8-kube-api-access-kcfd9\") pod \"coredns-6d4b75cb6d-drj6w\" (UID: \"acda1803-07b5-450e-9f46-e7e69c3270b8\") " pod="kube-system/coredns-6d4b75cb6d-drj6w"
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: I1007 12:02:47.739338   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acda1803-07b5-450e-9f46-e7e69c3270b8-config-volume\") pod \"coredns-6d4b75cb6d-drj6w\" (UID: \"acda1803-07b5-450e-9f46-e7e69c3270b8\") " pod="kube-system/coredns-6d4b75cb6d-drj6w"
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: I1007 12:02:47.739355   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdz29\" (UniqueName: \"kubernetes.io/projected/49dcc6d5-57a9-4af2-8609-e2492469a978-kube-api-access-bdz29\") pod \"coredns-6d4b75cb6d-54g2l\" (UID: \"49dcc6d5-57a9-4af2-8609-e2492469a978\") " pod="kube-system/coredns-6d4b75cb6d-54g2l"
	Oct 07 12:02:47 running-upgrade-802000 kubelet[12529]: I1007 12:02:47.739366   12529 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49dcc6d5-57a9-4af2-8609-e2492469a978-config-volume\") pod \"coredns-6d4b75cb6d-54g2l\" (UID: \"49dcc6d5-57a9-4af2-8609-e2492469a978\") " pod="kube-system/coredns-6d4b75cb6d-54g2l"
	Oct 07 12:02:48 running-upgrade-802000 kubelet[12529]: I1007 12:02:48.660058   12529 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="988895131ef5bdf09774bf96d8f6b47a818099e0eaee3abb9a5403aabd844966"
	Oct 07 12:02:48 running-upgrade-802000 kubelet[12529]: I1007 12:02:48.691041   12529 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7adc61423fd0f7e8d1aaa27ad594188824181624434f4c9f5c8d775429acbba3"
	Oct 07 12:06:35 running-upgrade-802000 kubelet[12529]: I1007 12:06:35.789220   12529 scope.go:110] "RemoveContainer" containerID="dcd1d90b7fbb0165c36200d7e06e032bc3eec5b811276a297ec36be7d63cd130"
	Oct 07 12:06:35 running-upgrade-802000 kubelet[12529]: I1007 12:06:35.802755   12529 scope.go:110] "RemoveContainer" containerID="447efd5b173ed98f4cbb32d4c1eb3b22a5475e9cf08aa521ec15e7327df05337"
	
	
	==> storage-provisioner [4814fa4df319] <==
	I1007 12:02:48.010450       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 12:02:48.018995       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 12:02:48.019015       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 12:02:48.022931       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 12:02:48.023120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-802000_980ba760-b29a-43b4-b386-6a81d69a314b!
	I1007 12:02:48.023178       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7e5787a-662e-47a8-8e8c-a3e2c70acc54", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-802000_980ba760-b29a-43b4-b386-6a81d69a314b became leader
	I1007 12:02:48.124253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-802000_980ba760-b29a-43b4-b386-6a81d69a314b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-802000 -n running-upgrade-802000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-802000 -n running-upgrade-802000: exit status 2 (15.611419375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-802000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-802000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-802000
--- FAIL: TestRunningBinaryUpgrade (614.44s)
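The status probe above reported the apiserver as "Stopped", which is why the harness skipped its kubectl checks. For a manual re-check outside the harness, one sketch using the same binary and minikube's documented status fields (Host, Kubelet, APIServer) would be:

	out/minikube-darwin-arm64 status -p running-upgrade-802000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'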

TestKubernetesUpgrade (18.48s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.775160334s)

-- stdout --
	* [kubernetes-upgrade-530000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-530000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 04:59:53.681914    8482 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:59:53.682084    8482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:59:53.682087    8482 out.go:358] Setting ErrFile to fd 2...
	I1007 04:59:53.682090    8482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:59:53.682244    8482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:59:53.683411    8482 out.go:352] Setting JSON to false
	I1007 04:59:53.701730    8482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5364,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:59:53.701795    8482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:59:53.707677    8482 out.go:177] * [kubernetes-upgrade-530000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:59:53.715611    8482 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:59:53.715635    8482 notify.go:220] Checking for updates...
	I1007 04:59:53.722650    8482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:59:53.730424    8482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:59:53.737600    8482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:59:53.741576    8482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:59:53.744564    8482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:59:53.752055    8482 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:59:53.752125    8482 config.go:182] Loaded profile config "running-upgrade-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 04:59:53.752178    8482 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:59:53.756637    8482 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 04:59:53.763624    8482 start.go:297] selected driver: qemu2
	I1007 04:59:53.763636    8482 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:59:53.763644    8482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:59:53.766284    8482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:59:53.769651    8482 out.go:177] * Automatically selected the socket_vmnet network
	I1007 04:59:53.772641    8482 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 04:59:53.772653    8482 cni.go:84] Creating CNI manager for ""
	I1007 04:59:53.772681    8482 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 04:59:53.772711    8482 start.go:340] cluster config:
	{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:59:53.777022    8482 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:59:53.785573    8482 out.go:177] * Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	I1007 04:59:53.789605    8482 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 04:59:53.789634    8482 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 04:59:53.789647    8482 cache.go:56] Caching tarball of preloaded images
	I1007 04:59:53.789743    8482 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 04:59:53.789748    8482 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 04:59:53.789817    8482 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/kubernetes-upgrade-530000/config.json ...
	I1007 04:59:53.789827    8482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/kubernetes-upgrade-530000/config.json: {Name:mkdbb8bceaf1b46c78b5ec8060981cfcda93d94b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:59:53.790126    8482 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 04:59:53.790170    8482 start.go:364] duration metric: took 36.959µs to acquireMachinesLock for "kubernetes-upgrade-530000"
	I1007 04:59:53.790181    8482 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 04:59:53.790221    8482 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 04:59:53.798617    8482 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 04:59:53.813268    8482 start.go:159] libmachine.API.Create for "kubernetes-upgrade-530000" (driver="qemu2")
	I1007 04:59:53.813290    8482 client.go:168] LocalClient.Create starting
	I1007 04:59:53.813361    8482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 04:59:53.813400    8482 main.go:141] libmachine: Decoding PEM data...
	I1007 04:59:53.813410    8482 main.go:141] libmachine: Parsing certificate...
	I1007 04:59:53.813452    8482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 04:59:53.813481    8482 main.go:141] libmachine: Decoding PEM data...
	I1007 04:59:53.813489    8482 main.go:141] libmachine: Parsing certificate...
	I1007 04:59:53.813971    8482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 04:59:53.987616    8482 main.go:141] libmachine: Creating SSH key...
	I1007 04:59:54.044910    8482 main.go:141] libmachine: Creating Disk image...
	I1007 04:59:54.044920    8482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 04:59:54.045105    8482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 04:59:54.055271    8482 main.go:141] libmachine: STDOUT: 
	I1007 04:59:54.055290    8482 main.go:141] libmachine: STDERR: 
	I1007 04:59:54.055343    8482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2 +20000M
	I1007 04:59:54.064058    8482 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 04:59:54.064074    8482 main.go:141] libmachine: STDERR: 
	I1007 04:59:54.064086    8482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 04:59:54.064093    8482 main.go:141] libmachine: Starting QEMU VM...
	I1007 04:59:54.064105    8482 qemu.go:418] Using hvf for hardware acceleration
	I1007 04:59:54.064153    8482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a0:fb:06:b7:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 04:59:54.065946    8482 main.go:141] libmachine: STDOUT: 
	I1007 04:59:54.065962    8482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 04:59:54.065992    8482 client.go:171] duration metric: took 252.695542ms to LocalClient.Create
	I1007 04:59:56.068217    8482 start.go:128] duration metric: took 2.277969667s to createHost
	I1007 04:59:56.068296    8482 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 2.278124209s
	W1007 04:59:56.068354    8482 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:59:56.080723    8482 out.go:177] * Deleting "kubernetes-upgrade-530000" in qemu2 ...
	W1007 04:59:56.104210    8482 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 04:59:56.104241    8482 start.go:729] Will try again in 5 seconds ...
	I1007 05:00:01.106407    8482 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:00:01.106720    8482 start.go:364] duration metric: took 214.584µs to acquireMachinesLock for "kubernetes-upgrade-530000"
	I1007 05:00:01.106771    8482 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:00:01.106953    8482 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:00:01.116423    8482 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:00:01.157035    8482 start.go:159] libmachine.API.Create for "kubernetes-upgrade-530000" (driver="qemu2")
	I1007 05:00:01.157093    8482 client.go:168] LocalClient.Create starting
	I1007 05:00:01.157284    8482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:00:01.157411    8482 main.go:141] libmachine: Decoding PEM data...
	I1007 05:00:01.157431    8482 main.go:141] libmachine: Parsing certificate...
	I1007 05:00:01.157510    8482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:00:01.157578    8482 main.go:141] libmachine: Decoding PEM data...
	I1007 05:00:01.157593    8482 main.go:141] libmachine: Parsing certificate...
	I1007 05:00:01.158312    8482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:00:01.315486    8482 main.go:141] libmachine: Creating SSH key...
	I1007 05:00:01.364947    8482 main.go:141] libmachine: Creating Disk image...
	I1007 05:00:01.364952    8482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:00:01.365126    8482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 05:00:01.374704    8482 main.go:141] libmachine: STDOUT: 
	I1007 05:00:01.374724    8482 main.go:141] libmachine: STDERR: 
	I1007 05:00:01.374784    8482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2 +20000M
	I1007 05:00:01.383222    8482 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:00:01.383237    8482 main.go:141] libmachine: STDERR: 
	I1007 05:00:01.383248    8482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 05:00:01.383253    8482 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:00:01.383261    8482 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:00:01.383300    8482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3b:82:98:cd:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 05:00:01.385041    8482 main.go:141] libmachine: STDOUT: 
	I1007 05:00:01.385055    8482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:00:01.385069    8482 client.go:171] duration metric: took 227.970334ms to LocalClient.Create
	I1007 05:00:03.387225    8482 start.go:128] duration metric: took 2.280249625s to createHost
	I1007 05:00:03.387282    8482 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 2.280555667s
	W1007 05:00:03.387559    8482 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:00:03.396033    8482 out.go:201] 
	W1007 05:00:03.400237    8482 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:00:03.400274    8482 out.go:270] * 
	* 
	W1007 05:00:03.401608    8482 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:00:03.412033    8482 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
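Every qemu2 start in this test fails the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM is never launched. A minimal triage sketch for the host, assuming the /opt/socket_vmnet install paths shown in the cluster config above (the daemon invocation follows the socket_vmnet README and may differ by version or install method):

    pgrep -fl socket_vmnet       # is the daemon process running at all?
    ls -l /var/run/socket_vmnet  # does the unix socket exist and is it accessible?
    # restart the daemon if it is down (README-style invocation; adjust paths/gateway):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet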
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-530000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-530000: (3.280447625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-530000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-530000 status --format={{.Host}}: exit status 7 (65.257125ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.189969791s)

-- stdout --
	* [kubernetes-upgrade-530000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:00:06.805273    8819 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:00:06.805426    8819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:00:06.805430    8819 out.go:358] Setting ErrFile to fd 2...
	I1007 05:00:06.805432    8819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:00:06.805571    8819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:00:06.806619    8819 out.go:352] Setting JSON to false
	I1007 05:00:06.824479    8819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5377,"bootTime":1728297029,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:00:06.824553    8819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:00:06.829827    8819 out.go:177] * [kubernetes-upgrade-530000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:00:06.837756    8819 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:00:06.837823    8819 notify.go:220] Checking for updates...
	I1007 05:00:06.844869    8819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:00:06.847683    8819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:00:06.850732    8819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:00:06.853779    8819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:00:06.856659    8819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:00:06.860016    8819 config.go:182] Loaded profile config "kubernetes-upgrade-530000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:00:06.860279    8819 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:00:06.864756    8819 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:00:06.871684    8819 start.go:297] selected driver: qemu2
	I1007 05:00:06.871689    8819 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:00:06.871732    8819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:00:06.874203    8819 cni.go:84] Creating CNI manager for ""
	I1007 05:00:06.874240    8819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:00:06.874263    8819 start.go:340] cluster config:
	{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-530000 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:00:06.878872    8819 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:00:06.886720    8819 out.go:177] * Starting "kubernetes-upgrade-530000" primary control-plane node in "kubernetes-upgrade-530000" cluster
	I1007 05:00:06.890725    8819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:00:06.890745    8819 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:00:06.890754    8819 cache.go:56] Caching tarball of preloaded images
	I1007 05:00:06.890850    8819 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:00:06.890856    8819 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:00:06.890929    8819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/kubernetes-upgrade-530000/config.json ...
	I1007 05:00:06.891450    8819 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:00:06.891485    8819 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "kubernetes-upgrade-530000"
	I1007 05:00:06.891495    8819 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:00:06.891501    8819 fix.go:54] fixHost starting: 
	I1007 05:00:06.891630    8819 fix.go:112] recreateIfNeeded on kubernetes-upgrade-530000: state=Stopped err=<nil>
	W1007 05:00:06.891640    8819 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:00:06.895743    8819 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	I1007 05:00:06.903582    8819 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:00:06.903626    8819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3b:82:98:cd:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 05:00:06.905877    8819 main.go:141] libmachine: STDOUT: 
	I1007 05:00:06.905898    8819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:00:06.905932    8819 fix.go:56] duration metric: took 14.428292ms for fixHost
	I1007 05:00:06.905936    8819 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 14.447125ms
	W1007 05:00:06.905943    8819 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:00:06.905999    8819 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:00:06.906006    8819 start.go:729] Will try again in 5 seconds ...
	I1007 05:00:11.908154    8819 start.go:360] acquireMachinesLock for kubernetes-upgrade-530000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:00:11.908408    8819 start.go:364] duration metric: took 202.458µs to acquireMachinesLock for "kubernetes-upgrade-530000"
	I1007 05:00:11.908483    8819 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:00:11.908494    8819 fix.go:54] fixHost starting: 
	I1007 05:00:11.908841    8819 fix.go:112] recreateIfNeeded on kubernetes-upgrade-530000: state=Stopped err=<nil>
	W1007 05:00:11.908856    8819 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:00:11.913545    8819 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-530000" ...
	I1007 05:00:11.920539    8819 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:00:11.920665    8819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3b:82:98:cd:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubernetes-upgrade-530000/disk.qcow2
	I1007 05:00:11.926321    8819 main.go:141] libmachine: STDOUT: 
	I1007 05:00:11.926356    8819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:00:11.926412    8819 fix.go:56] duration metric: took 17.91825ms for fixHost
	I1007 05:00:11.926422    8819 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 18.001541ms
	W1007 05:00:11.926545    8819 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-530000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:00:11.933377    8819 out.go:201] 
	W1007 05:00:11.937496    8819 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:00:11.937517    8819 out.go:270] * 
	* 
	W1007 05:00:11.938530    8819 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:00:11.951475    8819 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
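The second start fails before provisioning for the same socket_vmnet reason, leaving behind a profile that points at a VM that never booted. The log's own advice is the usual recovery path; a sketch of the sequence, with the profile name taken from this test:

    minikube delete -p kubernetes-upgrade-530000    # discard the stale profile and VM
    minikube start -p kubernetes-upgrade-530000 --driver=qemu2 --kubernetes-version=v1.31.1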
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-530000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-530000 version --output=json: exit status 1 (49.389417ms)

** stderr ** 
	error: context "kubernetes-upgrade-530000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
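kubectl fails here because a kubeconfig context is only written once a cluster actually comes up; with both starts failing, no such context exists. A quick confirmation using standard kubectl config commands:

    kubectl config get-contexts     # the kubernetes-upgrade-530000 entry is absent
    kubectl config current-context  # errors out or names an unrelated context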
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-07 05:00:12.012248 -0700 PDT m=+1002.227481835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-530000 -n kubernetes-upgrade-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-530000 -n kubernetes-upgrade-530000: exit status 7 (35.590917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-530000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-530000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-530000
--- FAIL: TestKubernetesUpgrade (18.48s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.92s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19763
- KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1682088195/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.92s)
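This skip-upgrade failure (and the identical one below) is an environment mismatch rather than a regression: the hyperkit driver is x86_64-only, so DRV_UNSUPPORTED_OS is the expected outcome on this darwin/arm64 agent. On Apple Silicon the driver has to be one that supports arm64, for example:

    minikube start --driver=qemu2   # the VM driver exercised elsewhere in this report
    minikube start --driver=docker  # container-based alternative when Docker is available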

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19763
- KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2646348396/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)

TestStoppedBinaryUpgrade/Upgrade (564.55s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2746098465 start -p stopped-upgrade-013000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2746098465 start -p stopped-upgrade-013000 --memory=2200 --vm-driver=qemu2 : (41.063214375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2746098465 -p stopped-upgrade-013000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2746098465 -p stopped-upgrade-013000 stop: (3.089340084s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-013000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-013000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.289370833s)

-- stdout --
	* [stopped-upgrade-013000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-013000" primary control-plane node in "stopped-upgrade-013000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-013000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1007 05:01:01.379427    8853 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:01:01.379587    8853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:01:01.379591    8853 out.go:358] Setting ErrFile to fd 2...
	I1007 05:01:01.379593    8853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:01:01.379734    8853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:01:01.380984    8853 out.go:352] Setting JSON to false
	I1007 05:01:01.399807    8853 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5432,"bootTime":1728297029,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:01:01.399871    8853 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:01:01.404236    8853 out.go:177] * [stopped-upgrade-013000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:01:01.412247    8853 notify.go:220] Checking for updates...
	I1007 05:01:01.415254    8853 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:01:01.423195    8853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:01:01.431185    8853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:01:01.438197    8853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:01:01.446205    8853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:01:01.454227    8853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:01:01.458596    8853 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:01:01.463283    8853 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 05:01:01.467220    8853 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:01:01.470206    8853 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:01:01.478217    8853 start.go:297] selected driver: qemu2
	I1007 05:01:01.478223    8853 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:01:01.478276    8853 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:01:01.481225    8853 cni.go:84] Creating CNI manager for ""
	I1007 05:01:01.481268    8853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:01:01.481294    8853 start.go:340] cluster config:
	{Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:01:01.481352    8853 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:01:01.493212    8853 out.go:177] * Starting "stopped-upgrade-013000" primary control-plane node in "stopped-upgrade-013000" cluster
	I1007 05:01:01.497289    8853 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:01:01.497310    8853 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1007 05:01:01.497316    8853 cache.go:56] Caching tarball of preloaded images
	I1007 05:01:01.497403    8853 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:01:01.497409    8853 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1007 05:01:01.497475    8853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/config.json ...
	I1007 05:01:01.497858    8853 start.go:360] acquireMachinesLock for stopped-upgrade-013000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:01:01.497912    8853 start.go:364] duration metric: took 47.417µs to acquireMachinesLock for "stopped-upgrade-013000"
	I1007 05:01:01.497922    8853 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:01:01.497928    8853 fix.go:54] fixHost starting: 
	I1007 05:01:01.498053    8853 fix.go:112] recreateIfNeeded on stopped-upgrade-013000: state=Stopped err=<nil>
	W1007 05:01:01.498065    8853 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:01:01.501250    8853 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-013000" ...
	I1007 05:01:01.509210    8853 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:01:01.509328    8853 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51449-:22,hostfwd=tcp::51450-:2376,hostname=stopped-upgrade-013000 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/disk.qcow2
	I1007 05:01:01.561946    8853 main.go:141] libmachine: STDOUT: 
	I1007 05:01:01.561965    8853 main.go:141] libmachine: STDERR: 
	I1007 05:01:01.561972    8853 main.go:141] libmachine: Waiting for VM to start (ssh -p 51449 docker@127.0.0.1)...
	I1007 05:01:21.279752    8853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/config.json ...
	I1007 05:01:21.280528    8853 machine.go:93] provisionDockerMachine start ...
	I1007 05:01:21.280747    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.281192    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.281206    8853 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 05:01:21.368856    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 05:01:21.368886    8853 buildroot.go:166] provisioning hostname "stopped-upgrade-013000"
	I1007 05:01:21.369047    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.369296    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.369310    8853 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-013000 && echo "stopped-upgrade-013000" | sudo tee /etc/hostname
	I1007 05:01:21.447394    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-013000
	
	I1007 05:01:21.447475    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.447637    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.447650    8853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-013000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-013000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-013000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 05:01:21.515328    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 05:01:21.515344    8853 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19763-6232/.minikube CaCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19763-6232/.minikube}
	I1007 05:01:21.515353    8853 buildroot.go:174] setting up certificates
	I1007 05:01:21.515359    8853 provision.go:84] configureAuth start
	I1007 05:01:21.515366    8853 provision.go:143] copyHostCerts
	I1007 05:01:21.515460    8853 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem, removing ...
	I1007 05:01:21.515468    8853 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem
	I1007 05:01:21.515610    8853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.pem (1082 bytes)
	I1007 05:01:21.515833    8853 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem, removing ...
	I1007 05:01:21.515838    8853 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem
	I1007 05:01:21.516021    8853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/cert.pem (1123 bytes)
	I1007 05:01:21.516234    8853 exec_runner.go:144] found /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem, removing ...
	I1007 05:01:21.516239    8853 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem
	I1007 05:01:21.516310    8853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19763-6232/.minikube/key.pem (1679 bytes)
	I1007 05:01:21.516438    8853 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-013000 san=[127.0.0.1 localhost minikube stopped-upgrade-013000]
	I1007 05:01:21.555323    8853 provision.go:177] copyRemoteCerts
	I1007 05:01:21.555405    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 05:01:21.555415    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:01:21.589969    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 05:01:21.597803    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 05:01:21.604964    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 05:01:21.612202    8853 provision.go:87] duration metric: took 96.8305ms to configureAuth
	I1007 05:01:21.612210    8853 buildroot.go:189] setting minikube options for container-runtime
	I1007 05:01:21.612322    8853 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:01:21.612375    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.612478    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.612483    8853 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1007 05:01:21.672209    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1007 05:01:21.672218    8853 buildroot.go:70] root file system type: tmpfs
	I1007 05:01:21.672269    8853 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1007 05:01:21.672346    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.672453    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.672490    8853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1007 05:01:21.735964    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1007 05:01:21.736025    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:21.736130    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:21.736140    8853 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1007 05:01:22.107535    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1007 05:01:22.107547    8853 machine.go:96] duration metric: took 827.011459ms to provisionDockerMachine
	I1007 05:01:22.107558    8853 start.go:293] postStartSetup for "stopped-upgrade-013000" (driver="qemu2")
	I1007 05:01:22.107565    8853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 05:01:22.107648    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 05:01:22.107660    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:01:22.142180    8853 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 05:01:22.143640    8853 info.go:137] Remote host: Buildroot 2021.02.12
	I1007 05:01:22.143646    8853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19763-6232/.minikube/addons for local assets ...
	I1007 05:01:22.143732    8853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19763-6232/.minikube/files for local assets ...
	I1007 05:01:22.143876    8853 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem -> 67502.pem in /etc/ssl/certs
	I1007 05:01:22.144041    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 05:01:22.152214    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem --> /etc/ssl/certs/67502.pem (1708 bytes)
	I1007 05:01:22.159150    8853 start.go:296] duration metric: took 51.584334ms for postStartSetup
	I1007 05:01:22.159170    8853 fix.go:56] duration metric: took 20.66130425s for fixHost
	I1007 05:01:22.159240    8853 main.go:141] libmachine: Using SSH client type: native
	I1007 05:01:22.159360    8853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026f21f0] 0x1026f4a30 <nil>  [] 0s} localhost 51449 <nil> <nil>}
	I1007 05:01:22.159365    8853 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 05:01:22.220954    8853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302481.849964171
	
	I1007 05:01:22.220965    8853 fix.go:216] guest clock: 1728302481.849964171
	I1007 05:01:22.220969    8853 fix.go:229] Guest: 2024-10-07 05:01:21.849964171 -0700 PDT Remote: 2024-10-07 05:01:22.159172 -0700 PDT m=+20.806799959 (delta=-309.207829ms)
	I1007 05:01:22.220980    8853 fix.go:200] guest clock delta is within tolerance: -309.207829ms
	I1007 05:01:22.220985    8853 start.go:83] releasing machines lock for "stopped-upgrade-013000", held for 20.723129875s
	I1007 05:01:22.221070    8853 ssh_runner.go:195] Run: cat /version.json
	I1007 05:01:22.221081    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:01:22.221070    8853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 05:01:22.221106    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	W1007 05:01:22.221611    8853 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51449: connect: connection refused
	I1007 05:01:22.221631    8853 retry.go:31] will retry after 346.353974ms: dial tcp [::1]:51449: connect: connection refused
	W1007 05:01:22.251710    8853 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1007 05:01:22.251754    8853 ssh_runner.go:195] Run: systemctl --version
	I1007 05:01:22.253560    8853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 05:01:22.255223    8853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 05:01:22.255253    8853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1007 05:01:22.258480    8853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1007 05:01:22.263390    8853 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
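The two find/sed invocations above normalize any bridge- or podman-style CNI config to the pod CIDR: every "subnet" value is rewritten to 10.244.0.0/16 (and, for podman configs, "gateway" to 10.244.0.1). A hedged Go equivalent of the subnet substitution, using regexp in place of sed:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteSubnet mirrors the sed expression from the log line above:
    //   s|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g
    func rewriteSubnet(conf string) string {
    	re := regexp.MustCompile(`(?m)^(.*)"subnet": ".*"(.*)$`)
    	return re.ReplaceAllString(conf, `$1"subnet": "10.244.0.0/16"$2`)
    }

    func main() {
    	in := `      "subnet": "10.88.0.0/16",` // example input, not from this log
    	fmt.Println(rewriteSubnet(in))          // "subnet": "10.244.0.0/16",
    }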
	I1007 05:01:22.263398    8853 start.go:495] detecting cgroup driver to use...
	I1007 05:01:22.263472    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:01:22.270564    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1007 05:01:22.274294    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 05:01:22.277381    8853 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 05:01:22.277411    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 05:01:22.280323    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:01:22.283131    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 05:01:22.286667    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:01:22.290081    8853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 05:01:22.293406    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 05:01:22.296217    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 05:01:22.299043    8853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 05:01:22.302405    8853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 05:01:22.305700    8853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 05:01:22.308383    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:22.383690    8853 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1007 05:01:22.390213    8853 start.go:495] detecting cgroup driver to use...
	I1007 05:01:22.390299    8853 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1007 05:01:22.396226    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:01:22.403034    8853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 05:01:22.409569    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:01:22.414175    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:01:22.418695    8853 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 05:01:22.450794    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:01:22.455830    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:01:22.461166    8853 ssh_runner.go:195] Run: which cri-dockerd
	I1007 05:01:22.462397    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1007 05:01:22.465467    8853 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1007 05:01:22.470572    8853 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1007 05:01:22.529464    8853 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1007 05:01:22.611155    8853 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1007 05:01:22.611233    8853 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1007 05:01:22.616558    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:22.697081    8853 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:01:23.829929    8853 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.132828458s)
	I1007 05:01:23.830020    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1007 05:01:23.835216    8853 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1007 05:01:23.842070    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:01:23.847326    8853 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1007 05:01:23.928354    8853 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1007 05:01:24.004286    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:24.085510    8853 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1007 05:01:24.091135    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:01:24.095993    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:24.173375    8853 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1007 05:01:24.213017    8853 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1007 05:01:24.213139    8853 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1007 05:01:24.214973    8853 start.go:563] Will wait 60s for crictl version
	I1007 05:01:24.215035    8853 ssh_runner.go:195] Run: which crictl
	I1007 05:01:24.216306    8853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 05:01:24.230967    8853 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
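Both waits above (start.go:542 for the cri-dockerd socket, start.go:563 for crictl) follow the same poll-until-deadline pattern. A rough Go sketch of the socket wait; the 250ms retry interval is an assumption of this sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls with stat until the path exists or the deadline
    // passes, like the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }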
	I1007 05:01:24.231054    8853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:01:24.247528    8853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:01:24.270584    8853 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1007 05:01:24.270669    8853 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1007 05:01:24.271918    8853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
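The hosts-file one-liner above (repeated later for control-plane.minikube.internal) is an idempotent update: strip any existing line ending in the tab-separated name, append the desired mapping, and copy the result back over /etc/hosts. The same filter in Go, as a sketch:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry drops any stale line for name and appends ip<TAB>name,
    // mirroring the { grep -v ...; echo ...; } > /tmp/h.$$ pipeline above.
    func ensureHostsEntry(hosts, ip, name string) string {
    	var keep []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, ip+"\t"+name)
    	return strings.Join(keep, "\n") + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "10.0.2.2", "host.minikube.internal"))
    }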
	I1007 05:01:24.275457    8853 kubeadm.go:883] updating cluster {Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1007 05:01:24.275501    8853 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:01:24.275548    8853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:01:24.287362    8853 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:01:24.287370    8853 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:01:24.287429    8853 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:01:24.290763    8853 ssh_runner.go:195] Run: which lz4
	I1007 05:01:24.292721    8853 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 05:01:24.293897    8853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 05:01:24.293906    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1007 05:01:25.303378    8853 docker.go:649] duration metric: took 1.010704292s to copy over tarball
	I1007 05:01:25.303449    8853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 05:01:26.485450    8853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.181980917s)
	I1007 05:01:26.485464    8853 ssh_runner.go:146] rm: /preloaded.tar.lz4
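The preload path above is: check for an existing /preloaded.tar.lz4 with stat, scp the ~360 MB tarball in when it is missing, unpack it into /var with xattrs preserved, then delete it. A thin Go wrapper around the extraction command, for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload runs the same tar invocation as the ssh_runner line above,
    // keeping the security.capability extended attributes on unpacked files.
    func extractPreload(path string) error {
    	cmd := exec.Command("sudo", "tar", "--xattrs",
    		"--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", path)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("tar failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }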
	I1007 05:01:26.501267    8853 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:01:26.504153    8853 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1007 05:01:26.508935    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:26.588949    8853 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:01:28.184651    8853 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.595690375s)
	I1007 05:01:28.184744    8853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:01:28.197731    8853 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:01:28.197743    8853 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:01:28.197748    8853 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
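The "wasn't preloaded" verdict is a plain name match: the tarball ships images tagged k8s.gcr.io/*, while this minikube expects registry.k8s.io/* names, so every required image is reported missing and the per-image cache load below kicks in. A sketch of that check:

    package main

    import "fmt"

    // needsTransfer mirrors the check implied by docker.go:691 above: an image
    // counts as preloaded only if its exact name:tag appears in `docker images`.
    func needsTransfer(required string, preloaded []string) bool {
    	for _, img := range preloaded {
    		if img == required {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	preloaded := []string{"k8s.gcr.io/kube-apiserver:v1.24.1"} // old registry name
    	fmt.Println(needsTransfer("registry.k8s.io/kube-apiserver:v1.24.1", preloaded)) // true
    }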
	I1007 05:01:28.201664    8853 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:28.203866    8853 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.205654    8853 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:28.206456    8853 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.208216    8853 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.208330    8853 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.209584    8853 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.209923    8853 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.210949    8853 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.211032    8853 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.212206    8853 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.212324    8853 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 05:01:28.213471    8853 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:28.213511    8853 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.214597    8853 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 05:01:28.215063    8853 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:28.742234    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.757686    8853 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1007 05:01:28.757722    8853 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.757795    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:01:28.760179    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.768677    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.768900    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1007 05:01:28.772536    8853 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1007 05:01:28.772566    8853 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.772639    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:01:28.785781    8853 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1007 05:01:28.785842    8853 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.785960    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:01:28.796485    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1007 05:01:28.797414    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1007 05:01:28.858209    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.868866    8853 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1007 05:01:28.868891    8853 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.868964    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:01:28.881526    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1007 05:01:28.895303    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.905971    8853 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1007 05:01:28.905997    8853 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.906059    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1007 05:01:28.915792    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1007 05:01:28.915928    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:01:28.917423    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1007 05:01:28.917441    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1007 05:01:28.965758    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 05:01:28.994926    8853 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1007 05:01:28.994951    8853 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1007 05:01:28.995022    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1007 05:01:29.020500    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 05:01:29.020643    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 05:01:29.023727    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1007 05:01:29.023750    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W1007 05:01:29.033311    8853 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 05:01:29.033470    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:29.037185    8853 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 05:01:29.037204    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1007 05:01:29.046162    8853 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 05:01:29.046305    8853 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:29.059251    8853 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1007 05:01:29.059277    8853 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:29.059340    8853 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:01:29.101535    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1007 05:01:29.105124    8853 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1007 05:01:29.105148    8853 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:29.105223    8853 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:01:29.113557    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 05:01:29.113713    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:01:29.137805    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1007 05:01:29.137842    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1007 05:01:29.140157    8853 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 05:01:29.140291    8853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:01:29.161642    8853 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1007 05:01:29.161678    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1007 05:01:29.252020    8853 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:01:29.252033    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1007 05:01:29.310474    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1007 05:01:29.310523    8853 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:01:29.310530    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1007 05:01:29.624290    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1007 05:01:29.624312    8853 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:01:29.624319    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1007 05:01:29.769857    8853 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 05:01:29.769901    8853 cache_images.go:92] duration metric: took 1.5721495s to LoadCachedImages
	W1007 05:01:29.769943    8853 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
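Each image that survives the existence check is copied to /var/lib/minikube/images and streamed into the daemon with `sudo cat <file> | docker load`. A rough Go equivalent (assuming the archive is readable without sudo, which this sketch does not handle):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadImage streams a cached image archive into Docker, like the
    // "sudo cat /var/lib/minikube/images/... | docker load" lines above.
    func loadImage(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // docker load reads the image tar archive from stdin
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
    		fmt.Println(err)
    	}
    }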
	I1007 05:01:29.769949    8853 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1007 05:01:29.770000    8853 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-013000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 05:01:29.770088    8853 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1007 05:01:29.783704    8853 cni.go:84] Creating CNI manager for ""
	I1007 05:01:29.783719    8853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:01:29.783728    8853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 05:01:29.783738    8853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-013000 NodeName:stopped-upgrade-013000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 05:01:29.783807    8853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-013000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 05:01:29.783886    8853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1007 05:01:29.787187    8853 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 05:01:29.787230    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 05:01:29.790256    8853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1007 05:01:29.795314    8853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 05:01:29.800385    8853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1007 05:01:29.805662    8853 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1007 05:01:29.806884    8853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 05:01:29.810794    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:01:29.890059    8853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:01:29.896813    8853 certs.go:68] Setting up /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000 for IP: 10.0.2.15
	I1007 05:01:29.896822    8853 certs.go:194] generating shared ca certs ...
	I1007 05:01:29.896835    8853 certs.go:226] acquiring lock for ca certs: {Name:mk64252dad53b4f3a87f635894b143f083e9f2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:29.897023    8853 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.key
	I1007 05:01:29.897096    8853 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.key
	I1007 05:01:29.897105    8853 certs.go:256] generating profile certs ...
	I1007 05:01:29.897193    8853 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.key
	I1007 05:01:29.897210    8853 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c
	I1007 05:01:29.897221    8853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1007 05:01:29.989073    8853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c ...
	I1007 05:01:29.989088    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c: {Name:mkf812314bd83bfbea46a9b7eb7076846ede5d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:29.989591    8853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c ...
	I1007 05:01:29.989601    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c: {Name:mkd259e9af840cb8b5cfd8c70623cba409e3615b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:29.989755    8853 certs.go:381] copying /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt.8988d64c -> /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt
	I1007 05:01:29.989887    8853 certs.go:385] copying /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key.8988d64c -> /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key
	I1007 05:01:29.990067    8853 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/proxy-client.key
	I1007 05:01:29.990217    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750.pem (1338 bytes)
	W1007 05:01:29.990252    8853 certs.go:480] ignoring /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750_empty.pem, impossibly tiny 0 bytes
	I1007 05:01:29.990258    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 05:01:29.990291    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem (1082 bytes)
	I1007 05:01:29.990323    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem (1123 bytes)
	I1007 05:01:29.990354    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/key.pem (1679 bytes)
	I1007 05:01:29.990416    8853 certs.go:484] found cert: /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem (1708 bytes)
	I1007 05:01:29.990775    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 05:01:29.998091    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 05:01:30.004976    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 05:01:30.011868    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 05:01:30.018920    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 05:01:30.026353    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 05:01:30.033647    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 05:01:30.040802    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 05:01:30.047857    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/ssl/certs/67502.pem --> /usr/share/ca-certificates/67502.pem (1708 bytes)
	I1007 05:01:30.054783    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 05:01:30.062113    8853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/6750.pem --> /usr/share/ca-certificates/6750.pem (1338 bytes)
	I1007 05:01:30.069504    8853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 05:01:30.074736    8853 ssh_runner.go:195] Run: openssl version
	I1007 05:01:30.076723    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67502.pem && ln -fs /usr/share/ca-certificates/67502.pem /etc/ssl/certs/67502.pem"
	I1007 05:01:30.079613    8853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67502.pem
	I1007 05:01:30.081005    8853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:45 /usr/share/ca-certificates/67502.pem
	I1007 05:01:30.081034    8853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67502.pem
	I1007 05:01:30.082753    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67502.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 05:01:30.085731    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 05:01:30.088683    8853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:01:30.090222    8853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:01:30.090259    8853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:01:30.092226    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 05:01:30.095194    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6750.pem && ln -fs /usr/share/ca-certificates/6750.pem /etc/ssl/certs/6750.pem"
	I1007 05:01:30.098659    8853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6750.pem
	I1007 05:01:30.100216    8853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:45 /usr/share/ca-certificates/6750.pem
	I1007 05:01:30.100240    8853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6750.pem
	I1007 05:01:30.101989    8853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6750.pem /etc/ssl/certs/51391683.0"
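Each CA installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 3ec20f2e.0, 51391683.0 above), which is how OpenSSL-based clients locate it in the system trust store. A sketch that derives the link name the same way:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash shells out to openssl, as the log does, because the
    // /etc/ssl/certs/<hash>.0 naming scheme is defined by `openssl x509 -hash`.
    func subjectHash(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
    }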
	I1007 05:01:30.105383    8853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 05:01:30.106858    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 05:01:30.108960    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 05:01:30.110825    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 05:01:30.112829    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 05:01:30.114623    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 05:01:30.116525    8853 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
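The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same test in Go via crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reproduces `openssl x509 -checkend <seconds>`: true when
    // the certificate's NotAfter falls inside the given window from now.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }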
	I1007 05:01:30.118418    8853 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:01:30.118495    8853 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:01:30.128741    8853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 05:01:30.131775    8853 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 05:01:30.131783    8853 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 05:01:30.131812    8853 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 05:01:30.134933    8853 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:01:30.135286    8853 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-013000" does not appear in /Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:01:30.135386    8853 kubeconfig.go:62] /Users/jenkins/minikube-integration/19763-6232/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-013000" cluster setting kubeconfig missing "stopped-upgrade-013000" context setting]
	I1007 05:01:30.135583    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/kubeconfig: {Name:mk4c5026c1645f877740c1904a5f1050530a5193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:01:30.136061    8853 kapi.go:59] client config for stopped-upgrade-013000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.key", CAFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104147ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:01:30.136415    8853 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 05:01:30.139208    8853 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-013000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
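The drift check renders a fresh kubeadm.yaml.new, diffs it against the copy on disk, and reconfigures when they differ; here the old file predates the unix:// CRI-socket scheme and still assumed the systemd cgroup driver, while Docker in this VM runs with cgroupfs. A sketch of the decision using diff's exit code (0 = identical, 1 = differ):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // driftDetected mirrors kubeadm.go:640 above: diff the rendered config
    // against the one on disk and reconfigure the cluster when they diverge.
    func driftDetected(oldPath, newPath string) (bool, error) {
    	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
    	if err == nil {
    		return false, nil // exit 0: files identical
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, nil // exit 1: files differ
    	}
    	return false, err // exit 2 or exec failure
    }

    func main() {
    	fmt.Println(driftDetected("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
    }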
	I1007 05:01:30.139214    8853 kubeadm.go:1160] stopping kube-system containers ...
	I1007 05:01:30.139265    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:01:30.150472    8853 docker.go:483] Stopping containers: [d5ac2d0f9779 fa15598b25e6 023cc649d91f eb90044e46b6 0e9d10ca462d b8fc485885e4 483f390e6c19 0d30b4d058f2]
	I1007 05:01:30.150544    8853 ssh_runner.go:195] Run: docker stop d5ac2d0f9779 fa15598b25e6 023cc649d91f eb90044e46b6 0e9d10ca462d b8fc485885e4 483f390e6c19 0d30b4d058f2
	I1007 05:01:30.161412    8853 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 05:01:30.166850    8853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:01:30.169896    8853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:01:30.169901    8853 kubeadm.go:157] found existing configuration files:
	
	I1007 05:01:30.169933    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf
	I1007 05:01:30.172396    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:01:30.172429    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:01:30.174991    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf
	I1007 05:01:30.178061    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:01:30.178098    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:01:30.180939    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf
	I1007 05:01:30.183387    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:01:30.183417    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:01:30.186495    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf
	I1007 05:01:30.189472    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:01:30.189515    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 05:01:30.192269    8853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:01:30.195169    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.217373    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.802734    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.938622    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.964652    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:01:30.988440    8853 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:01:30.988532    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:01:31.490586    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:01:31.990641    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:01:31.994836    8853 api_server.go:72] duration metric: took 1.00640025s to wait for apiserver process to appear ...
	I1007 05:01:31.994849    8853 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:01:31.994858    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:36.996952    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:36.997011    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:41.997391    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:41.997462    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:46.998008    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:46.998036    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:51.998592    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:51.998636    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:01:56.999497    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:01:56.999540    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:02.000535    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:02.000583    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:07.001819    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:07.001866    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:12.003607    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:12.003648    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:17.005695    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:17.005733    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:22.007984    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:22.008032    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:27.010324    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:27.010346    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:32.011627    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
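
	[editor's note] The run above is minikube polling the apiserver's /healthz endpoint: each "Checking" line (api_server.go:253) is followed five seconds later by a "stopped: ... context deadline exceeded" line (api_server.go:269), i.e. every probe's client timeout expires before the apiserver answers. A minimal Go sketch of that probe pattern, assuming only what these log lines show (probeHealthz and its parameters are hypothetical names, not minikube's actual API):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "log"
	        "net/http"
	        "time"
	    )

	    // probeHealthz re-GETs the healthz URL until it answers 200 OK or the
	    // overall deadline passes. Each attempt gets a 5s client timeout,
	    // matching the 5s spacing between the log lines above.
	    func probeHealthz(url string, overall time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // Assumption: the probe skips certificate verification, since
	            // the guest apiserver's serving cert is self-signed.
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(overall)
	        for time.Now().Before(deadline) {
	            log.Printf("Checking apiserver healthz at %s ...", url)
	            resp, err := client.Get(url)
	            if err != nil {
	                // In this report the Get never returns: the 5s client
	                // timeout fires and paces the retry loop by itself.
	                log.Printf("stopped: %s: %v", url, err)
	                continue
	            }
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                return nil // apiserver is healthy
	            }
	        }
	        return fmt.Errorf("apiserver at %s never reported healthy", url)
	    }

	    func main() {
	        if err := probeHealthz("https://10.0.2.15:8443/healthz", time.Minute); err != nil {
	            log.Fatal(err)
	        }
	    }
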
	I1007 05:02:32.011852    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:32.028106    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:32.028195    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:32.041153    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:32.041238    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:32.052334    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:32.052410    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:32.062689    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:32.062765    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:32.072858    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:32.072930    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:32.083829    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:32.083910    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:32.094031    8853 logs.go:282] 0 containers: []
	W1007 05:02:32.094044    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:32.094118    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:32.108964    8853 logs.go:282] 0 containers: []
	W1007 05:02:32.108977    8853 logs.go:284] No container was found matching "storage-provisioner"
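
	[editor's note] Once the probe gives up, minikube falls back to a diagnostic sweep. It first enumerates containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; the logs.go:282 lines report what it finds, with kindnet and storage-provisioner coming back empty. Most components report two IDs, consistent with a restart leaving the exited predecessor visible to ps -a. A sketch of that discovery step (containerIDs is a hypothetical helper; the docker invocation is copied verbatim from the ssh_runner.go:195 lines above):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all containers, running or exited, whose name
	    // carries the k8s_<component> prefix.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil // one short ID per line
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
	            "kube-scheduler", "kube-proxy", "kube-controller-manager",
	            "kindnet", "storage-provisioner"} {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Printf("%q: %v\n", c, err)
	                continue
	            }
	            fmt.Printf("%q: %d containers: %v\n", c, len(ids), ids)
	        }
	    }
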
	I1007 05:02:32.108993    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:32.108999    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:32.120383    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:32.120395    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:32.135582    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:32.135592    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:32.174827    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:32.174834    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:32.190299    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:32.190310    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:32.204375    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:32.204387    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:32.216392    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:32.216402    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:32.236844    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:32.236859    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:32.262057    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:32.262067    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:32.365280    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:32.365291    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:02:32.381068    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:32.381080    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:32.395668    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:32.395680    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:32.411384    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:32.411398    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:32.423630    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:32.423643    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:32.427916    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:32.427923    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
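
	[editor's note] With the IDs in hand, the sweep pulls the last 400 lines from every container (docker logs --tail 400 <id>) and from the host-level sources: the kubelet and docker/cri-docker units via journalctl, the kernel ring buffer via dmesg, and kubectl describe nodes against the guest kubeconfig, all through the same bash -c wrapper the ssh_runner.go:195 lines show. A condensed sketch (the gather helper is hypothetical; every command string is taken from the log):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // gather runs one collection command through bash -c, as ssh_runner
	    // does over SSH, and prints whatever comes back alongside any error.
	    func gather(name, shellCmd string) {
	        fmt.Printf("Gathering logs for %s ...\n", name)
	        out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
	        if err != nil {
	            fmt.Printf("  %s failed: %v\n", name, err)
	        }
	        fmt.Print(string(out))
	    }

	    func main() {
	        // Two IDs per component where a restart left an exited instance.
	        for _, id := range []string{"725ddad58d12", "eb90044e46b6"} {
	            gather("kube-apiserver ["+id+"]", "docker logs --tail 400 "+id)
	        }
	        gather("kubelet", "sudo journalctl -u kubelet -n 400")
	        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	        gather("dmesg",
	            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	        gather("describe nodes",
	            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes "+
	                "--kubeconfig=/var/lib/minikube/kubeconfig")
	    }
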
	I1007 05:02:34.957509    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:39.959759    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:39.959948    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:39.979701    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:39.979812    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:39.993818    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:39.993912    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:40.005796    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:40.005877    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:40.016381    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:40.016459    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:40.031628    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:40.031706    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:40.042930    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:40.043017    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:40.053531    8853 logs.go:282] 0 containers: []
	W1007 05:02:40.053541    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:40.053605    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:40.063731    8853 logs.go:282] 0 containers: []
	W1007 05:02:40.063744    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:02:40.063751    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:40.063756    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:40.079559    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:40.079570    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:40.094502    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:40.094512    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:40.105828    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:40.105842    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:40.117897    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:40.117907    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:40.131557    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:40.131566    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:40.170749    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:40.170756    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:40.207661    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:40.207670    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:40.223120    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:40.223128    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:02:40.247739    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:40.247748    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:40.264951    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:40.264962    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:40.278846    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:40.278855    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:40.302691    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:40.302699    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:40.306620    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:40.306626    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:40.318546    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:40.318556    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:02:42.835413    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:47.836684    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:47.836788    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:47.849286    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:47.849372    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:47.861519    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:47.861601    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:47.872955    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:47.873033    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:47.884236    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:47.884328    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:47.899116    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:47.899211    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:47.911252    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:47.911335    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:47.923067    8853 logs.go:282] 0 containers: []
	W1007 05:02:47.923080    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:47.923152    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:47.934883    8853 logs.go:282] 0 containers: []
	W1007 05:02:47.934898    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:02:47.934906    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:47.934912    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:02:47.961686    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:47.961708    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:47.974541    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:47.974553    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:47.987952    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:47.987970    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:48.007457    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:48.007470    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:48.012304    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:48.012316    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:48.026821    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:48.026835    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:48.041802    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:48.041817    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:48.054061    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:48.054074    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:48.093614    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:48.093623    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:48.132850    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:48.132869    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:48.148137    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:48.148148    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:48.160713    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:48.160725    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:02:48.176178    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:48.176192    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:48.190253    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:48.190264    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
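
	[editor's note] From here the report repeats the same probe-then-sweep cycle until the overall start deadline expires: a ~5s healthz timeout, a few seconds of gathering, then roughly 2.5s before the next probe (compare 05:02:48.190 above with 05:02:50.717 below). The outer loop, reduced to a compilable sketch with stand-in functions for the two phases already shown; the 2.5s gap is read off the timestamps, and the overall budget is not visible in this excerpt:

	    package main

	    import (
	        "errors"
	        "log"
	        "time"
	    )

	    // Hypothetical stand-ins for the two phases the log interleaves.
	    func probeHealthz() error { return errors.New("context deadline exceeded") }
	    func dumpDiagnostics()    { log.Print("docker ps sweep + log gathering") }

	    func main() {
	        // Assumption: the exact overall budget isn't shown in this excerpt.
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            if probeHealthz() == nil { // each real probe blocks ~5s
	                log.Print("apiserver healthy")
	                return
	            }
	            dumpDiagnostics()                   // docker/journalctl calls
	            time.Sleep(2500 * time.Millisecond) // gap seen between cycles
	        }
	        log.Print("gave up waiting for the apiserver")
	    }
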
	I1007 05:02:50.717669    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:02:55.719853    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:02:55.720090    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:02:55.749407    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:02:55.749516    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:02:55.765117    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:02:55.765208    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:02:55.777549    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:02:55.777630    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:02:55.789582    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:02:55.789666    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:02:55.800097    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:02:55.800173    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:02:55.811053    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:02:55.811133    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:02:55.821285    8853 logs.go:282] 0 containers: []
	W1007 05:02:55.821296    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:02:55.821360    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:02:55.835050    8853 logs.go:282] 0 containers: []
	W1007 05:02:55.835061    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:02:55.835069    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:02:55.835075    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:02:55.848676    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:02:55.848685    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:02:55.860139    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:02:55.860151    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:02:55.875261    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:02:55.875275    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:02:55.890280    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:02:55.890290    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:02:55.907708    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:02:55.907719    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:02:55.948642    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:02:55.948649    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:02:55.962798    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:02:55.962808    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:02:55.988268    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:02:55.988276    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:02:56.015571    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:02:56.015583    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:02:56.050844    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:02:56.050856    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:02:56.069617    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:02:56.069626    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:02:56.081511    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:02:56.081521    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:02:56.099392    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:02:56.099408    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:02:56.104358    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:02:56.104367    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:02:58.621261    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:03.621688    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:03.622008    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:03.649397    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:03.649534    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:03.667025    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:03.667131    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:03.681274    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:03.681349    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:03.693506    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:03.693589    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:03.705373    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:03.705452    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:03.716350    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:03.716425    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:03.726792    8853 logs.go:282] 0 containers: []
	W1007 05:03:03.726806    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:03.726874    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:03.737025    8853 logs.go:282] 0 containers: []
	W1007 05:03:03.737037    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:03.737045    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:03.737051    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:03.751262    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:03.751278    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:03.770607    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:03.770616    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:03.782183    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:03.782192    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:03.818063    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:03.818074    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:03.832876    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:03.832887    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:03.845194    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:03.845204    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:03.860081    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:03.860093    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:03.874442    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:03.874453    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:03.879323    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:03.879333    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:03.905177    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:03.905190    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:03.931127    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:03.931143    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:03.942990    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:03.943000    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:03.983298    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:03.983308    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:04.009041    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:04.009049    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:06.535588    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:11.537030    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:11.537580    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:11.573329    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:11.573487    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:11.595035    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:11.595153    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:11.615540    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:11.615621    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:11.627028    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:11.627112    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:11.638084    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:11.638167    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:11.652286    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:11.652369    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:11.663395    8853 logs.go:282] 0 containers: []
	W1007 05:03:11.663411    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:11.663481    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:11.676413    8853 logs.go:282] 0 containers: []
	W1007 05:03:11.676425    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:11.676432    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:11.676438    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:11.702079    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:11.702088    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:11.714321    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:11.714333    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:11.728866    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:11.728880    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:11.767790    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:11.767799    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:11.782073    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:11.782084    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:11.794170    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:11.794184    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:11.809964    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:11.809976    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:11.825064    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:11.825076    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:11.843090    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:11.843105    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:11.855272    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:11.855288    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:11.859448    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:11.859455    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:11.894864    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:11.894878    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:11.916500    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:11.916514    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:11.928195    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:11.928206    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:14.455155    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:19.457861    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:19.458136    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:19.482583    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:19.482694    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:19.507005    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:19.507093    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:19.518036    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:19.518109    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:19.528525    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:19.528606    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:19.538800    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:19.538881    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:19.549672    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:19.549746    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:19.560499    8853 logs.go:282] 0 containers: []
	W1007 05:03:19.560512    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:19.560582    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:19.571086    8853 logs.go:282] 0 containers: []
	W1007 05:03:19.571097    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:19.571107    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:19.571112    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:19.596528    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:19.596538    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:19.608170    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:19.608184    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:19.620319    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:19.620329    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:19.633095    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:19.633106    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:19.647425    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:19.647434    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:19.661150    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:19.661161    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:19.676121    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:19.676129    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:19.693455    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:19.693466    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:19.708262    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:19.708271    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:19.746875    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:19.746890    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:19.762756    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:19.762771    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:19.785880    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:19.785886    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:19.824614    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:19.824625    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:19.829440    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:19.829447    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:22.346019    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:27.347791    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:27.347970    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:27.364596    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:27.364682    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:27.376262    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:27.376342    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:27.386614    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:27.386685    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:27.398017    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:27.398095    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:27.408736    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:27.408808    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:27.419395    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:27.419478    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:27.429603    8853 logs.go:282] 0 containers: []
	W1007 05:03:27.429618    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:27.429685    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:27.439923    8853 logs.go:282] 0 containers: []
	W1007 05:03:27.439934    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:27.439943    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:27.439949    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:27.452491    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:27.452503    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:27.456951    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:27.456958    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:27.468701    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:27.468711    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:27.480370    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:27.480380    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:27.497717    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:27.497726    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:27.510850    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:27.510859    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:27.534176    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:27.534186    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:27.569936    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:27.569947    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:27.583825    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:27.583837    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:27.599335    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:27.599346    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:27.610856    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:27.610868    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:27.626095    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:27.626105    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:27.663541    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:27.663550    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:27.677270    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:27.677284    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:30.204691    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:35.206950    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:35.207065    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:35.217975    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:35.218043    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:35.229808    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:35.229888    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:35.240958    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:35.241037    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:35.251482    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:35.251554    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:35.262384    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:35.262460    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:35.272873    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:35.272957    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:35.282742    8853 logs.go:282] 0 containers: []
	W1007 05:03:35.282755    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:35.282816    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:35.293229    8853 logs.go:282] 0 containers: []
	W1007 05:03:35.293241    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:35.293251    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:35.293257    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:35.307168    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:35.307183    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:35.319605    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:35.319616    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:35.344506    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:35.344514    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:35.348826    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:35.348834    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:35.383776    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:35.383785    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:35.395782    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:35.395791    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:35.413863    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:35.413875    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:35.428084    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:35.428096    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:35.442195    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:35.442205    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:35.456822    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:35.456831    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:35.468445    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:35.468457    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:35.483213    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:35.483223    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:35.520521    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:35.520530    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:35.544956    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:35.544968    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:38.058980    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:43.060676    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:43.060827    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:43.073519    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:43.073603    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:43.084555    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:43.084652    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:43.095339    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:43.095423    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:43.107828    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:43.107909    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:43.123806    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:43.123880    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:43.133952    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:43.134014    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:43.144183    8853 logs.go:282] 0 containers: []
	W1007 05:03:43.144195    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:43.144259    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:43.154001    8853 logs.go:282] 0 containers: []
	W1007 05:03:43.154015    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:43.154022    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:43.154029    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:43.168915    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:43.168925    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:43.193982    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:43.193994    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:43.198610    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:43.198618    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:43.212889    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:43.212899    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:43.227473    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:43.227482    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:43.238855    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:43.238866    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:43.252627    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:43.252641    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:43.291526    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:43.291533    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:43.325438    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:43.325448    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:43.339687    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:43.339697    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:43.356602    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:43.356611    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:43.369471    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:43.369483    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:43.394631    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:43.394640    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:43.406242    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:43.406254    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:45.920187    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:50.922518    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:50.922646    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:50.933665    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:50.933734    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:50.944652    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:50.944736    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:50.955347    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:50.955431    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:50.965754    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:50.965834    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:50.979371    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:50.979451    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:50.989951    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:50.990019    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:51.000462    8853 logs.go:282] 0 containers: []
	W1007 05:03:51.000475    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:51.000551    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:51.011237    8853 logs.go:282] 0 containers: []
	W1007 05:03:51.011249    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:51.011257    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:51.011262    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:51.024934    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:51.024944    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:51.041499    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:51.041522    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:51.053973    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:51.053985    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:51.084912    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:51.084927    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:51.096611    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:51.096620    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:51.108531    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:51.108542    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:51.125918    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:51.125928    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:51.139586    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:51.139597    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:51.143625    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:51.143636    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:51.158790    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:51.158801    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:51.172622    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:51.172634    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:03:51.184209    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:51.184219    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:51.207588    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:51.207601    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:51.247720    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:51.247729    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:53.784167    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:03:58.786490    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:03:58.786639    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:03:58.797457    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:03:58.797544    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:03:58.808252    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:03:58.808332    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:03:58.819048    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:03:58.819129    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:03:58.829501    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:03:58.829574    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:03:58.839871    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:03:58.839949    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:03:58.850624    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:03:58.850700    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:03:58.860914    8853 logs.go:282] 0 containers: []
	W1007 05:03:58.860925    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:03:58.860985    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:03:58.871190    8853 logs.go:282] 0 containers: []
	W1007 05:03:58.871205    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:03:58.871212    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:03:58.871217    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:03:58.911184    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:03:58.911195    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:03:58.925687    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:03:58.925698    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:03:58.951763    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:03:58.951774    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:03:58.963216    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:03:58.963232    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:03:58.977209    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:03:58.977223    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:03:59.019974    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:03:59.019989    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:03:59.024152    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:03:59.024158    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:03:59.038664    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:03:59.038674    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:03:59.050566    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:03:59.050577    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:03:59.062670    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:03:59.062685    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:03:59.076550    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:03:59.076564    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:03:59.093651    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:03:59.093667    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:03:59.118920    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:03:59.118931    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:03:59.132784    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:03:59.132798    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
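
Before each gathering pass, the tool looks up every control-plane component's container IDs with a docker name filter ("k8s_" plus the component name). Two IDs show up per component here because each has an exited container alongside a restarted one, while kindnet and storage-provisioner match nothing and produce the warnings above. A sketch of that enumeration follows; it is again illustrative rather than minikube's actual source, and it simply shells out to the same docker command the log records.

// Illustrative sketch of the per-component container enumeration seen in
// this log: docker ps -a --filter name=k8s_<component> --format {{.ID}}.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One container ID per line; Fields also drops the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
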
	I1007 05:04:01.646787    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:06.647252    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:06.647435    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:06.660252    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:06.660331    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:06.672285    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:06.672368    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:06.682863    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:06.682945    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:06.694370    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:06.694450    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:06.705288    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:06.705357    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:06.715658    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:06.715736    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:06.726213    8853 logs.go:282] 0 containers: []
	W1007 05:04:06.726225    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:06.726298    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:06.740251    8853 logs.go:282] 0 containers: []
	W1007 05:04:06.740262    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:06.740271    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:06.740276    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:06.779588    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:06.779595    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:06.783735    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:06.783744    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:06.817684    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:06.817694    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:06.846356    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:06.846367    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:06.860879    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:06.860896    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:06.874645    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:06.874660    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:06.886898    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:06.886909    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:06.902198    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:06.902208    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:06.913423    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:06.913436    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:06.925892    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:06.925902    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:06.937185    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:06.937196    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:06.956027    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:06.956037    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:06.971114    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:06.971123    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:06.988815    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:06.988829    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:09.514143    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:14.516391    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:14.516558    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:14.528766    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:14.528844    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:14.539089    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:14.539166    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:14.548979    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:14.549061    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:14.559497    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:14.559576    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:14.573030    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:14.573106    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:14.584028    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:14.584106    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:14.594660    8853 logs.go:282] 0 containers: []
	W1007 05:04:14.594672    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:14.594737    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:14.605101    8853 logs.go:282] 0 containers: []
	W1007 05:04:14.605115    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:14.605122    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:14.605128    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:14.625683    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:14.625693    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:14.643593    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:14.643604    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:14.656469    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:14.656482    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:14.684162    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:14.684181    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:14.698445    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:14.698456    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:14.710033    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:14.710046    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:14.723943    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:14.723958    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:14.749455    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:14.749462    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:14.786938    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:14.786945    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:14.821785    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:14.821796    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:14.837027    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:14.837040    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:14.841898    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:14.841906    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:14.856873    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:14.856884    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:14.869072    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:14.869083    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:17.385378    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:22.387661    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:22.387827    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:22.404039    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:22.404134    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:22.416835    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:22.416917    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:22.427564    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:22.427641    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:22.438337    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:22.438416    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:22.452429    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:22.452505    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:22.467733    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:22.467809    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:22.477593    8853 logs.go:282] 0 containers: []
	W1007 05:04:22.477606    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:22.477672    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:22.487518    8853 logs.go:282] 0 containers: []
	W1007 05:04:22.487534    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:22.487544    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:22.487550    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:22.527276    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:22.527285    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:22.544783    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:22.544794    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:22.556367    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:22.556378    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:22.572927    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:22.572941    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:22.590129    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:22.590141    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:22.615090    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:22.615098    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:22.652478    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:22.652489    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:22.669106    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:22.669120    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:22.683903    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:22.683912    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:22.701841    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:22.701857    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:22.715858    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:22.715874    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:22.720428    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:22.720435    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:22.746221    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:22.746233    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:22.762316    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:22.762329    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:25.275909    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:30.278214    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:30.278379    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:30.289542    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:30.289636    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:30.300610    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:30.300689    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:30.311966    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:30.312043    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:30.325315    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:30.325397    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:30.339298    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:30.339378    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:30.350261    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:30.350341    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:30.360592    8853 logs.go:282] 0 containers: []
	W1007 05:04:30.360604    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:30.360679    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:30.371125    8853 logs.go:282] 0 containers: []
	W1007 05:04:30.371136    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:30.371165    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:30.371170    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:30.375381    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:30.375390    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:30.389923    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:30.389937    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:30.401900    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:30.401911    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:30.425680    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:30.425690    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:30.451453    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:30.451467    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:30.465713    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:30.465728    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:30.483856    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:30.483868    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:30.496390    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:30.496406    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:30.531135    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:30.531150    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:30.547231    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:30.547244    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:30.560984    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:30.560997    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:30.601074    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:30.601089    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:30.616979    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:30.616991    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:30.636138    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:30.636150    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:33.158345    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:38.160727    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:38.160909    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:38.176303    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:38.176395    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:38.188914    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:38.188993    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:38.202417    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:38.202484    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:38.214849    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:38.214929    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:38.224886    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:38.224967    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:38.235289    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:38.235374    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:38.245673    8853 logs.go:282] 0 containers: []
	W1007 05:04:38.245686    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:38.245751    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:38.261626    8853 logs.go:282] 0 containers: []
	W1007 05:04:38.261639    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:38.261648    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:38.261653    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:38.285978    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:38.285986    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:38.299002    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:38.299013    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:38.310408    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:38.310418    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:38.323875    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:38.323885    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:38.335881    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:38.335892    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:38.340428    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:38.340435    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:38.374560    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:38.374575    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:38.388779    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:38.388789    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:38.402755    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:38.402764    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:38.416855    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:38.416865    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:38.429466    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:38.429483    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:38.444400    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:38.444410    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:38.465249    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:38.465262    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:38.506476    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:38.506485    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:41.033685    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:46.034344    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:46.034526    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:46.051683    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:46.051779    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:46.066663    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:46.066752    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:46.079099    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:46.079181    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:46.089823    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:46.089904    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:46.100342    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:46.100419    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:46.110899    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:46.110980    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:46.127565    8853 logs.go:282] 0 containers: []
	W1007 05:04:46.127577    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:46.127649    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:46.138126    8853 logs.go:282] 0 containers: []
	W1007 05:04:46.138137    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:46.138146    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:46.138152    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:46.155520    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:46.155530    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:46.166831    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:46.166840    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:46.178184    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:46.178198    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:46.195354    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:46.195366    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:46.231975    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:46.231989    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:46.247050    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:46.247061    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:46.262259    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:46.262273    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:46.284288    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:46.284296    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:46.296235    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:46.296245    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:46.334853    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:46.334864    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:46.349771    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:46.349783    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:46.354152    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:46.354159    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:46.378776    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:46.378787    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:46.394030    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:46.394042    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:48.909496    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:04:53.910291    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:04:53.910408    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:04:53.923294    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:04:53.923378    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:04:53.935502    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:04:53.935575    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:04:53.953916    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:04:53.953995    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:04:53.964003    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:04:53.964075    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:04:53.976014    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:04:53.976089    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:04:53.986441    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:04:53.986520    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:04:53.996589    8853 logs.go:282] 0 containers: []
	W1007 05:04:53.996601    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:04:53.996664    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:04:54.007033    8853 logs.go:282] 0 containers: []
	W1007 05:04:54.007045    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:04:54.007052    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:04:54.007059    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:04:54.047435    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:04:54.047450    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:04:54.072536    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:04:54.072548    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:04:54.097819    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:04:54.097831    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:04:54.113349    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:04:54.113362    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:04:54.136901    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:04:54.136908    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:04:54.148515    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:04:54.148526    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:04:54.160513    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:04:54.160525    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:04:54.178445    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:04:54.178458    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:04:54.196478    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:04:54.196491    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:04:54.200821    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:04:54.200828    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:04:54.235819    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:04:54.235831    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:04:54.250557    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:04:54.250572    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:04:54.267959    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:04:54.267968    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:04:54.287042    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:04:54.287051    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:04:56.803981    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:01.805375    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:01.805488    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:01.822095    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:01.822185    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:01.832843    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:01.832920    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:01.843264    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:01.843341    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:01.854001    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:01.854071    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:01.867913    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:01.867995    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:01.882771    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:01.882844    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:01.894906    8853 logs.go:282] 0 containers: []
	W1007 05:05:01.894918    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:01.894979    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:01.905490    8853 logs.go:282] 0 containers: []
	W1007 05:05:01.905500    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:05:01.905508    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:01.905513    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:01.930038    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:01.930049    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:01.944390    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:01.944405    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:01.959402    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:01.959411    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:01.976548    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:01.976559    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:01.990726    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:01.990737    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:01.995005    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:01.995011    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:02.009372    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:02.009381    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:02.024744    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:02.024754    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:02.064612    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:02.064622    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:02.076781    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:02.076790    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:02.112676    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:02.112688    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:02.126904    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:02.126917    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:02.142539    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:02.142549    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:02.166735    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:02.166742    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:04.680597    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:09.682904    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:09.683109    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:09.700514    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:09.700605    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:09.713474    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:09.713565    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:09.724384    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:09.724461    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:09.735205    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:09.735288    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:09.745297    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:09.745383    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:09.755913    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:09.755986    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:09.766237    8853 logs.go:282] 0 containers: []
	W1007 05:05:09.766248    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:09.766309    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:09.776621    8853 logs.go:282] 0 containers: []
	W1007 05:05:09.776631    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:05:09.776638    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:09.776644    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:09.792564    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:09.792576    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:09.810459    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:09.810470    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:09.827101    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:09.827112    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:09.838687    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:09.838698    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:09.877653    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:09.877662    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:09.892079    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:09.892090    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:09.916446    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:09.916461    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:09.928235    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:09.928245    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:09.942829    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:09.942840    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:09.954907    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:09.954925    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:09.969056    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:09.969065    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:09.980972    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:09.980981    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:10.003927    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:10.003934    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:10.007986    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:10.007993    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:12.545203    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:17.547455    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:17.547563    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:17.558834    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:17.558918    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:17.570797    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:17.570873    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:17.582245    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:17.582334    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:17.594080    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:17.594163    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:17.604485    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:17.604569    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:17.615393    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:17.615465    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:17.625531    8853 logs.go:282] 0 containers: []
	W1007 05:05:17.625543    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:17.625612    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:17.636308    8853 logs.go:282] 0 containers: []
	W1007 05:05:17.636317    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:05:17.636324    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:17.636330    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:17.650265    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:17.650280    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:17.664391    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:17.664400    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:17.681308    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:17.681318    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:17.695089    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:17.695100    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:17.706937    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:17.706948    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:17.745623    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:17.745635    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:17.780559    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:17.780570    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:17.792068    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:17.792081    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:17.804506    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:17.804521    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:17.827283    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:17.827291    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:17.831322    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:17.831332    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:17.846305    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:17.846320    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:17.871994    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:17.872005    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:17.883648    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:17.883660    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:20.401093    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:25.403389    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:25.403592    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:05:25.439164    8853 logs.go:282] 2 containers: [725ddad58d12 eb90044e46b6]
	I1007 05:05:25.439254    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:05:25.458038    8853 logs.go:282] 2 containers: [9e9c07519fe4 fa15598b25e6]
	I1007 05:05:25.458124    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:05:25.468386    8853 logs.go:282] 1 containers: [6b9e066fe67c]
	I1007 05:05:25.468462    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:05:25.486417    8853 logs.go:282] 2 containers: [b875ccf80e7d 023cc649d91f]
	I1007 05:05:25.486500    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:05:25.496895    8853 logs.go:282] 1 containers: [559bb1b4f060]
	I1007 05:05:25.496978    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:05:25.507463    8853 logs.go:282] 2 containers: [2ded42b2c676 d5ac2d0f9779]
	I1007 05:05:25.507540    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:05:25.517733    8853 logs.go:282] 0 containers: []
	W1007 05:05:25.517742    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:05:25.517801    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:05:25.528027    8853 logs.go:282] 0 containers: []
	W1007 05:05:25.528040    8853 logs.go:284] No container was found matching "storage-provisioner"
	I1007 05:05:25.528049    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:05:25.528054    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:05:25.564723    8853 logs.go:123] Gathering logs for etcd [fa15598b25e6] ...
	I1007 05:05:25.564733    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa15598b25e6"
	I1007 05:05:25.579465    8853 logs.go:123] Gathering logs for coredns [6b9e066fe67c] ...
	I1007 05:05:25.579476    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9e066fe67c"
	I1007 05:05:25.591184    8853 logs.go:123] Gathering logs for kube-scheduler [023cc649d91f] ...
	I1007 05:05:25.591195    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 023cc649d91f"
	I1007 05:05:25.615711    8853 logs.go:123] Gathering logs for kube-controller-manager [2ded42b2c676] ...
	I1007 05:05:25.615722    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ded42b2c676"
	I1007 05:05:25.633830    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:05:25.633840    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:05:25.656414    8853 logs.go:123] Gathering logs for kube-apiserver [eb90044e46b6] ...
	I1007 05:05:25.656421    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb90044e46b6"
	I1007 05:05:25.680618    8853 logs.go:123] Gathering logs for kube-controller-manager [d5ac2d0f9779] ...
	I1007 05:05:25.680628    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ac2d0f9779"
	I1007 05:05:25.694512    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:05:25.694527    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:05:25.698769    8853 logs.go:123] Gathering logs for kube-apiserver [725ddad58d12] ...
	I1007 05:05:25.698776    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 725ddad58d12"
	I1007 05:05:25.712722    8853 logs.go:123] Gathering logs for etcd [9e9c07519fe4] ...
	I1007 05:05:25.712732    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9c07519fe4"
	I1007 05:05:25.728706    8853 logs.go:123] Gathering logs for kube-scheduler [b875ccf80e7d] ...
	I1007 05:05:25.728717    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b875ccf80e7d"
	I1007 05:05:25.740345    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:05:25.740356    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:05:25.783316    8853 logs.go:123] Gathering logs for kube-proxy [559bb1b4f060] ...
	I1007 05:05:25.783329    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 559bb1b4f060"
	I1007 05:05:25.795236    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:05:25.795250    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:05:28.308951    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:33.311191    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:33.311269    8853 kubeadm.go:597] duration metric: took 4m3.18020125s to restartPrimaryControlPlane
	W1007 05:05:33.311331    8853 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:05:33.311356    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:05:34.250392    8853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:05:34.255564    8853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:05:34.258676    8853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:05:34.261578    8853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:05:34.261583    8853 kubeadm.go:157] found existing configuration files:
	
	I1007 05:05:34.261616    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf
	I1007 05:05:34.264722    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:05:34.264749    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:05:34.268159    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf
	I1007 05:05:34.271254    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:05:34.271289    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:05:34.273875    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf
	I1007 05:05:34.276955    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:05:34.276988    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:05:34.280282    8853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf
	I1007 05:05:34.282877    8853 kubeadm.go:163] "https://control-plane.minikube.internal:51484" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51484 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:05:34.282906    8853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
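
The grep/rm sequence above keeps each kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint; anything else is removed so `kubeadm init` can regenerate it. An illustrative local sketch of the same sweep (the real run executes grep and `rm -f` over SSH as root):

```go
// Remove kubeconfigs that are missing or point at the wrong endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:51484"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it (rm -f semantics,
			// so a non-existent file is not treated as an error).
			os.Remove(conf)
			fmt.Println("removed stale config:", conf)
		}
	}
}
```
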
	I1007 05:05:34.285620    8853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:05:34.302727    8853 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:05:34.302825    8853 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:05:34.351290    8853 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:05:34.351344    8853 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:05:34.351402    8853 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 05:05:34.401810    8853 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:05:34.405082    8853 out.go:235]   - Generating certificates and keys ...
	I1007 05:05:34.405115    8853 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:05:34.405141    8853 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:05:34.405184    8853 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:05:34.405288    8853 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:05:34.405322    8853 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:05:34.405352    8853 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:05:34.405420    8853 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:05:34.405461    8853 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:05:34.405498    8853 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:05:34.405533    8853 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:05:34.405590    8853 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:05:34.405654    8853 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:05:34.542118    8853 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:05:34.652636    8853 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:05:34.715263    8853 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:05:34.790620    8853 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:05:34.821219    8853 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:05:34.821568    8853 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:05:34.821629    8853 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:05:34.915201    8853 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:05:34.919587    8853 out.go:235]   - Booting up control plane ...
	I1007 05:05:34.919641    8853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:05:34.919688    8853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:05:34.919719    8853 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:05:34.919757    8853 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:05:34.919843    8853 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:05:38.921722    8853 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001660 seconds
	I1007 05:05:38.921805    8853 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:05:38.925064    8853 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:05:39.443359    8853 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:05:39.443662    8853 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-013000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:05:39.948423    8853 kubeadm.go:310] [bootstrap-token] Using token: wgoviq.b6o62yjruw2arzai
	I1007 05:05:39.954757    8853 out.go:235]   - Configuring RBAC rules ...
	I1007 05:05:39.954814    8853 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:05:39.954850    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:05:39.962158    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:05:39.963148    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1007 05:05:39.964046    8853 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:05:39.965017    8853 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:05:39.968278    8853 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:05:40.133868    8853 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:05:40.352944    8853 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:05:40.353360    8853 kubeadm.go:310] 
	I1007 05:05:40.353388    8853 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:05:40.353397    8853 kubeadm.go:310] 
	I1007 05:05:40.353441    8853 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:05:40.353444    8853 kubeadm.go:310] 
	I1007 05:05:40.353463    8853 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:05:40.353495    8853 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:05:40.353523    8853 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:05:40.353528    8853 kubeadm.go:310] 
	I1007 05:05:40.353560    8853 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:05:40.353564    8853 kubeadm.go:310] 
	I1007 05:05:40.353586    8853 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:05:40.353590    8853 kubeadm.go:310] 
	I1007 05:05:40.353614    8853 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:05:40.353659    8853 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:05:40.353699    8853 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:05:40.353705    8853 kubeadm.go:310] 
	I1007 05:05:40.353742    8853 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:05:40.353781    8853 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:05:40.353784    8853 kubeadm.go:310] 
	I1007 05:05:40.353831    8853 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wgoviq.b6o62yjruw2arzai \
	I1007 05:05:40.353885    8853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 \
	I1007 05:05:40.353896    8853 kubeadm.go:310] 	--control-plane 
	I1007 05:05:40.353899    8853 kubeadm.go:310] 
	I1007 05:05:40.353950    8853 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:05:40.353953    8853 kubeadm.go:310] 
	I1007 05:05:40.353995    8853 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wgoviq.b6o62yjruw2arzai \
	I1007 05:05:40.354044    8853 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:febb875d4bbf06b7ad7d82e30b7a025b625ed533ad612094771c483b780a68f5 
	I1007 05:05:40.354199    8853 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
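
The `--discovery-token-ca-cert-hash` in the join commands above is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small standalone sketch that derives the same kind of `sha256:...` value from the conventional kubeadm CA path:

```go
// Compute the kubeadm discovery CA cert hash: SHA-256 over the
// DER-encoded SubjectPublicKeyInfo of the CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```
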
	I1007 05:05:40.354291    8853 cni.go:84] Creating CNI manager for ""
	I1007 05:05:40.354301    8853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:05:40.358500    8853 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:05:40.368537    8853 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:05:40.371777    8853 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
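
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config announced above. The exact bytes minikube writes are not shown in the log; the sketch below writes a conflist of the usual shape for a bridge setup (bridge plugin with host-local IPAM plus portmap) purely as an illustration:

```go
// Write a representative bridge CNI conflist; field values here are
// illustrative, not the exact config minikube generated in this run.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
```
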
	I1007 05:05:40.376412    8853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:05:40.376468    8853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:05:40.376504    8853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-013000 minikube.k8s.io/updated_at=2024_10_07T05_05_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=stopped-upgrade-013000 minikube.k8s.io/primary=true
	I1007 05:05:40.413726    8853 kubeadm.go:1113] duration metric: took 37.304ms to wait for elevateKubeSystemPrivileges
	I1007 05:05:40.413747    8853 ops.go:34] apiserver oom_adj: -16
	I1007 05:05:40.413753    8853 kubeadm.go:394] duration metric: took 4m10.296082s to StartCluster
	I1007 05:05:40.413763    8853 settings.go:142] acquiring lock: {Name:mk5872a0c73b3208924793fa59bf550628bdf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:05:40.413836    8853 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:05:40.414274    8853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/kubeconfig: {Name:mk4c5026c1645f877740c1904a5f1050530a5193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:05:40.414476    8853 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:05:40.414575    8853 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:05:40.414553    8853 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:05:40.414596    8853 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-013000"
	I1007 05:05:40.414602    8853 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-013000"
	I1007 05:05:40.414606    8853 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-013000"
	W1007 05:05:40.414606    8853 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:05:40.414617    8853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-013000"
	I1007 05:05:40.414629    8853 host.go:66] Checking if "stopped-upgrade-013000" exists ...
	I1007 05:05:40.420493    8853 out.go:177] * Verifying Kubernetes components...
	I1007 05:05:40.421171    8853 kapi.go:59] client config for stopped-upgrade-013000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.key", CAFile:"/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104147ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
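
The rest.Config dump above is a client-go configuration built from the profile's client cert/key and the cluster CA. A minimal sketch that constructs the same kind of client (paths taken from the log; the transport wrapper in the dump is omitted here):

```go
// Build a Kubernetes client from the cert/key/CA paths shown in the log
// and list nodes; in this run the call would time out like the healthz
// checks do.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/stopped-upgrade-013000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19763-6232/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```
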
	I1007 05:05:40.426796    8853 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-013000"
	W1007 05:05:40.426801    8853 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:05:40.426808    8853 host.go:66] Checking if "stopped-upgrade-013000" exists ...
	I1007 05:05:40.427321    8853 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:05:40.427326    8853 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:05:40.427330    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:05:40.432490    8853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:05:40.436507    8853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:05:40.440548    8853 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:05:40.440567    8853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:05:40.440584    8853 sshutil.go:53] new ssh client: &{IP:localhost Port:51449 SSHKeyPath:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/stopped-upgrade-013000/id_rsa Username:docker}
	I1007 05:05:40.538887    8853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:05:40.544084    8853 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:05:40.544170    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:05:40.548122    8853 api_server.go:72] duration metric: took 133.634583ms to wait for apiserver process to appear ...
	I1007 05:05:40.548132    8853 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:05:40.548140    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:40.569596    8853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:05:40.588589    8853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:05:40.949096    8853 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:05:40.949108    8853 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 05:05:45.548410    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:45.548471    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:50.548805    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:50.548826    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:05:55.550147    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:05:55.550171    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:00.550412    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:00.550458    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:05.550830    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:05.550869    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:10.551391    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:10.551415    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:06:10.951295    8853 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1007 05:06:10.954688    8853 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:06:10.968071    8853 addons.go:510] duration metric: took 30.553627916s for enable addons: enabled=[storage-provisioner]
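
The failed callback above is the default-storageclass enabler: per the error text, it lists StorageClasses in order to make "standard" the default, and in this run the List call itself times out against 10.0.2.15:8443. An outline sketch of that step (clientset construction as in the previous snippet; the function name is illustrative):

```go
// Outline of making the "standard" StorageClass the default via the
// well-known is-default-class annotation.
package storageclass

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func makeStandardDefault(ctx context.Context, cs *kubernetes.Clientset) error {
	scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err // surfaces as "Error listing StorageClasses" above
	}
	for i := range scs.Items {
		sc := &scs.Items[i]
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Mark exactly one class as the cluster default.
		if sc.Name == "standard" {
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		} else {
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		}
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```
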
	I1007 05:06:15.551959    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:15.552016    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:20.552769    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:20.552810    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:25.553777    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:25.553855    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:30.555068    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:30.555110    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:35.556601    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:35.556619    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:40.558416    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:40.558552    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:40.572958    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:06:40.573040    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:40.586641    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:06:40.586715    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:40.598313    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:06:40.598387    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:40.610094    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:06:40.610165    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:40.626054    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:06:40.626130    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:40.640842    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:06:40.640921    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:40.652053    8853 logs.go:282] 0 containers: []
	W1007 05:06:40.652065    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:40.652125    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:40.663319    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:06:40.663337    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:06:40.663342    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:06:40.678891    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:06:40.678902    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:06:40.693738    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:06:40.693748    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:06:40.708161    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:06:40.708172    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:06:40.724232    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:06:40.724242    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:06:40.746639    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:40.746651    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:40.771517    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:06:40.771525    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:40.790567    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:40.790579    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:40.828034    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:40.828043    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:40.832560    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:40.832567    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:40.868649    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:06:40.868663    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:06:40.882031    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:06:40.882042    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:06:40.894403    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:06:40.894418    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:06:43.408745    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:48.410961    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:48.411122    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:48.424069    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:06:48.424157    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:48.435299    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:06:48.435390    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:48.445888    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:06:48.445969    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:48.456149    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:06:48.456224    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:48.467014    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:06:48.467088    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:48.477975    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:06:48.478053    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:48.488457    8853 logs.go:282] 0 containers: []
	W1007 05:06:48.488469    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:48.488530    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:48.499502    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:06:48.499519    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:48.499525    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:48.541795    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:06:48.541812    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:06:48.557313    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:06:48.557328    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:06:48.572309    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:06:48.572319    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:06:48.583997    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:48.584009    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:48.619932    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:06:48.619946    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:06:48.631338    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:06:48.631349    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:06:48.647409    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:06:48.647425    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:06:48.663316    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:06:48.663327    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:06:48.675334    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:06:48.675349    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:06:48.696159    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:48.696177    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:48.722176    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:06:48.722185    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:48.734580    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:48.734590    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:51.241072    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:06:56.243383    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:06:56.243639    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:06:56.265297    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:06:56.265407    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:06:56.281026    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:06:56.281122    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:06:56.293459    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:06:56.293549    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:06:56.310341    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:06:56.310426    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:06:56.320704    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:06:56.320780    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:06:56.331480    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:06:56.331556    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:06:56.342037    8853 logs.go:282] 0 containers: []
	W1007 05:06:56.342048    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:06:56.342111    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:06:56.352432    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:06:56.352449    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:06:56.352455    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:06:56.367021    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:06:56.367032    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:06:56.384150    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:06:56.384161    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:06:56.395403    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:06:56.395413    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:06:56.410180    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:06:56.410192    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:06:56.421399    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:06:56.421410    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:06:56.456848    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:06:56.456859    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:06:56.471359    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:06:56.471374    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:06:56.486032    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:06:56.486042    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:06:56.497559    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:06:56.497570    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:06:56.521301    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:06:56.521317    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:06:56.533236    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:06:56.533244    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:06:56.567638    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:06:56.567646    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:06:59.074430    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:04.076743    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:07:04.076972    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:04.104821    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:04.104966    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:04.122743    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:04.122844    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:04.136363    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:04.136447    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:04.148219    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:04.148299    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:04.158687    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:04.158769    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:04.169298    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:04.169379    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:04.179311    8853 logs.go:282] 0 containers: []
	W1007 05:07:04.179320    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:04.179381    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:04.190696    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:07:04.190712    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:04.190719    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:04.229915    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:04.229929    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:04.244500    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:04.244510    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:04.259636    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:04.259647    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:04.275359    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:04.275369    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:04.287916    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:04.287926    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:04.299771    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:04.299783    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:04.337485    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:04.337493    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:04.341647    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:04.341656    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:04.355687    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:04.355697    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:04.367757    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:04.367768    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:04.385733    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:04.385745    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:04.397817    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:04.397827    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:07:06.923460    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:11.925842    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:07:11.926117    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:11.944001    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:11.944116    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:11.957868    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:11.957948    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:11.969199    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:11.969282    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:11.979595    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:11.979676    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:11.990019    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:11.990089    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:12.000719    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:12.000800    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:12.010896    8853 logs.go:282] 0 containers: []
	W1007 05:07:12.010909    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:12.010982    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:12.021282    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:07:12.021299    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:12.021306    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:12.038951    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:12.038963    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:12.052781    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:12.052790    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:12.064080    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:12.064094    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:12.081574    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:12.081588    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:07:12.105948    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:12.105954    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:12.116922    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:12.116934    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:12.151198    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:12.151209    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:12.190309    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:12.190322    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:12.205499    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:12.205510    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:12.224726    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:12.224735    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:12.236440    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:12.236454    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:12.240712    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:12.240719    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:14.754676    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:19.757452    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:07:19.758006    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:19.797372    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:19.797525    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:19.819119    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:19.819253    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:19.834318    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:19.834402    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:19.848398    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:19.848480    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:19.859793    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:19.859867    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:19.870332    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:19.870401    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:19.880439    8853 logs.go:282] 0 containers: []
	W1007 05:07:19.880452    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:19.880508    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:19.894767    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:07:19.894783    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:19.894788    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:19.906260    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:19.906270    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:19.918181    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:19.918192    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:19.954221    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:19.954228    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:19.958307    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:19.958316    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:19.972825    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:19.972834    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:19.984666    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:19.984680    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:19.996616    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:19.996629    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:20.008445    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:20.008456    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:20.042224    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:20.042238    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:20.056084    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:20.056095    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:20.073155    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:20.073166    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:20.090367    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:20.090377    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:07:22.619561    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:27.622444    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
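[Editor's note] The pair of lines above (api_server.go:253 "Checking apiserver healthz" followed ~5s later by api_server.go:269 "stopped: ... Client.Timeout exceeded") repeats for the rest of this log: a health probe against https://10.0.2.15:8443/healthz that never gets an answer. A minimal sketch of that polling pattern is below; it is not minikube's actual code, just an illustration of a 5-second client timeout against a self-signed apiserver cert, with an assumed 2s pause between attempts.

    // Hedged sketch of the healthz retry loop visible in the log above.
    // The URL and ~5s timeout come from the log; the retry count and the
    // 2s pause are assumptions for illustration only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
            Transport: &http.Transport{
                // a bootstrap check against a self-signed apiserver cert
                // would skip verification like this
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 3; i++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                // e.g. "Client.Timeout exceeded while awaiting headers", as above
                fmt.Println("stopped:", err)
                time.Sleep(2 * time.Second) // assumed back-off before the next check
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }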
	I1007 05:07:27.623080    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:27.663511    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:27.663654    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:27.685892    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:27.686029    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:27.701076    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:27.701148    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:27.713865    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:27.713946    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:27.725110    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:27.725183    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:27.736369    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:27.736441    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:27.747206    8853 logs.go:282] 0 containers: []
	W1007 05:07:27.747218    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:27.747287    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:27.758938    8853 logs.go:282] 1 containers: [b3100f76e01a]
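[Editor's note] Each failed probe is followed by a discovery pass like the one just above: one docker ps name filter per control-plane component, yielding the container IDs whose logs get tailed next. A rough sketch of that pass, assuming a host with the docker CLI on PATH (again illustrative, not minikube's implementation):

    // Hedged sketch of the per-component container discovery seen above.
    // Component names mirror the k8s_* filters in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            // mirrors the "N containers: [...]" lines in the log;
            // an empty result corresponds to the `No container was found` warning
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }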
	I1007 05:07:27.758957    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:27.758964    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:27.764084    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:27.764095    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:27.802439    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:27.802450    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:27.819059    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:27.819072    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:27.831851    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:27.831862    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:27.850103    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:27.850113    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:27.861685    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:27.861694    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:27.897403    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:27.897412    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:27.911933    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:27.911946    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:27.923262    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:27.923273    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:27.935290    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:27.935302    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:27.950180    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:27.950190    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:27.961577    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:27.961592    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
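[Editor's note] The gathering pass that follows discovery runs each collector through bash, capped at the last 400 lines (docker logs --tail 400 for containers, journalctl -n 400 for kubelet/Docker, a filtered dmesg, and kubectl describe nodes). A compact sketch of that fan-out, assuming passwordless sudo and the container IDs listed in this log; the map iterates in arbitrary order, much as the log's gather order varies between rounds:

    // Hedged sketch of the log-gathering pass; commands are copied from
    // the log above, the etcd container ID (93e2eb0cde8d) included.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        collectors := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "etcd":    "docker logs --tail 400 93e2eb0cde8d",
        }
        for name, cmd := range collectors {
            fmt.Println("Gathering logs for", name, "...")
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Println(name, "failed:", err)
            }
            fmt.Print(string(out))
        }
    }

From here the log repeats this probe/discover/gather cycle verbatim (with coredns growing from 2 to 4 containers partway through) until the run times out; the remaining rounds are left untouched below.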
	I1007 05:07:30.486490    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:35.489163    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:07:35.489491    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:35.525396    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:35.525525    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:35.543214    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:35.543301    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:35.556752    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:35.556832    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:35.568455    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:35.568526    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:35.578960    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:35.579032    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:35.588964    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:35.589032    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:35.598912    8853 logs.go:282] 0 containers: []
	W1007 05:07:35.598924    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:35.598985    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:35.609395    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:07:35.609410    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:35.609415    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:35.643643    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:35.643655    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:35.648258    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:35.648266    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:35.663710    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:35.663720    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:35.675627    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:35.675642    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:35.693418    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:35.693428    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:35.704878    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:35.704888    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:35.716234    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:35.716243    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:35.755289    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:35.755299    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:35.771277    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:35.771287    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:35.785525    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:35.785534    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:35.806836    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:35.806851    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:35.822366    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:35.822376    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:07:38.350381    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:43.351359    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:07:43.351628    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:43.369610    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:43.369716    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:43.382903    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:43.382982    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:43.394490    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:43.394571    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:43.405029    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:43.405100    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:43.415258    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:43.415333    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:43.426250    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:43.426329    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:43.436196    8853 logs.go:282] 0 containers: []
	W1007 05:07:43.436208    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:43.436266    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:43.446289    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:07:43.446302    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:43.446306    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:43.457771    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:43.457786    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:43.462015    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:43.462023    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:43.475682    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:43.475695    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:43.487446    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:43.487462    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:43.502177    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:43.502187    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:43.519675    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:43.519687    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:43.531088    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:43.531097    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:07:43.554686    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:43.554694    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:43.588825    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:43.588834    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:43.623704    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:43.623713    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:43.638484    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:43.638493    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:43.650377    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:43.650388    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:46.164680    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:51.165926    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:07:51.166429    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:51.199181    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:51.199323    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:51.220140    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:51.220252    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:51.234422    8853 logs.go:282] 2 containers: [1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:51.234504    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:51.246332    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:51.246409    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:51.261325    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:51.261405    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:51.271831    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:51.271897    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:51.282258    8853 logs.go:282] 0 containers: []
	W1007 05:07:51.282271    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:51.282341    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:51.292860    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:07:51.292873    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:51.292878    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:51.297488    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:51.297495    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:51.332158    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:51.332171    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:51.347054    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:51.347066    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:51.361369    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:51.361381    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:51.374324    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:51.374340    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:51.389607    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:51.389618    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:51.401432    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:51.401445    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:51.437735    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:51.437744    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:51.455470    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:51.455480    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:51.466876    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:51.466886    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:07:51.491624    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:51.491631    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:51.503527    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:51.503538    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:54.023136    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:07:59.025967    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:07:59.026448    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:07:59.065531    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:07:59.065688    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:07:59.088101    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:07:59.088231    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:07:59.104083    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:07:59.104176    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:07:59.116407    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:07:59.116486    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:07:59.127445    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:07:59.127513    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:07:59.147882    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:07:59.147957    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:07:59.158244    8853 logs.go:282] 0 containers: []
	W1007 05:07:59.158255    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:07:59.158312    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:07:59.169199    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:07:59.169221    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:07:59.169227    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:07:59.173779    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:07:59.173788    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:07:59.207981    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:07:59.207996    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:07:59.222756    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:07:59.222765    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:07:59.234268    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:07:59.234278    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:07:59.252169    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:07:59.252179    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:07:59.263733    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:07:59.263743    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:07:59.300141    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:07:59.300153    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:07:59.314448    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:07:59.314460    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:07:59.326352    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:07:59.326363    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:07:59.338957    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:07:59.338968    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:07:59.352871    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:07:59.352881    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:07:59.364397    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:07:59.364406    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:07:59.387845    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:07:59.387852    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:07:59.398918    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:07:59.398929    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:01.912396    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:08:06.915172    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:08:06.915699    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:08:06.955204    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:08:06.955362    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:08:06.977550    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:08:06.977662    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:08:06.993210    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:08:06.993303    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:08:07.005417    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:08:07.005497    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:08:07.017042    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:08:07.017117    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:08:07.036160    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:08:07.036233    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:08:07.046834    8853 logs.go:282] 0 containers: []
	W1007 05:08:07.046850    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:08:07.046920    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:08:07.057662    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:08:07.057682    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:08:07.057687    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:08:07.077528    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:08:07.077542    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:07.089826    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:08:07.089836    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:08:07.102224    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:08:07.102237    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:08:07.106944    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:08:07.106950    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:08:07.118759    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:08:07.118772    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:08:07.130581    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:08:07.130594    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:08:07.142110    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:08:07.142121    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:08:07.178187    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:08:07.178194    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:08:07.212523    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:08:07.212534    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:08:07.226704    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:08:07.226719    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:08:07.239105    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:08:07.239118    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:08:07.250604    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:08:07.250615    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:08:07.265367    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:08:07.265379    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:08:07.287413    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:08:07.287423    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:08:09.815268    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:08:14.817306    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:08:14.817627    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:08:14.850745    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:08:14.850859    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:08:14.870565    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:08:14.870675    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:08:14.887906    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:08:14.887984    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:08:14.902222    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:08:14.902299    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:08:14.914685    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:08:14.914763    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:08:14.926309    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:08:14.926369    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:08:14.940004    8853 logs.go:282] 0 containers: []
	W1007 05:08:14.940017    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:08:14.940064    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:08:14.951031    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:08:14.951051    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:08:14.951057    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:08:14.986189    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:08:14.986197    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:08:15.021324    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:08:15.021335    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:08:15.032855    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:08:15.032865    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:08:15.044322    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:08:15.044333    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:08:15.058420    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:08:15.058431    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:08:15.069685    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:08:15.069695    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:08:15.074316    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:08:15.074325    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:08:15.088642    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:08:15.088653    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:08:15.102455    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:08:15.102465    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:15.113898    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:08:15.113912    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:08:15.130320    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:08:15.130329    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:08:15.141894    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:08:15.141903    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:08:15.165832    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:08:15.165842    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:08:15.177467    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:08:15.177477    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:08:17.697367    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:08:22.698595    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:08:22.699065    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:08:22.729603    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:08:22.729737    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:08:22.748244    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:08:22.748350    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:08:22.762548    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:08:22.762637    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:08:22.774327    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:08:22.774414    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:08:22.788539    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:08:22.788621    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:08:22.798963    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:08:22.799032    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:08:22.808841    8853 logs.go:282] 0 containers: []
	W1007 05:08:22.808855    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:08:22.808921    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:08:22.819853    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:08:22.819870    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:08:22.819876    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:08:22.862596    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:08:22.862611    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:22.874462    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:08:22.874478    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:08:22.890048    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:08:22.890058    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:08:22.902190    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:08:22.902206    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:08:22.914413    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:08:22.914427    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:08:22.949709    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:08:22.949719    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:08:22.963879    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:08:22.963889    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:08:22.978171    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:08:22.978182    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:08:22.989903    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:08:22.989914    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:08:23.006915    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:08:23.006925    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:08:23.018607    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:08:23.018615    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:08:23.043991    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:08:23.044003    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:08:23.048588    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:08:23.048596    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:08:23.059819    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:08:23.059831    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:08:25.575906    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:08:30.578521    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:08:30.578795    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:08:30.602703    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:08:30.602835    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:08:30.624641    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:08:30.624721    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:08:30.637023    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:08:30.637100    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:08:30.650812    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:08:30.650877    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:08:30.661465    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:08:30.661542    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:08:30.673574    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:08:30.673648    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:08:30.684777    8853 logs.go:282] 0 containers: []
	W1007 05:08:30.684789    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:08:30.684863    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:08:30.706115    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:08:30.706135    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:08:30.706141    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:08:30.718397    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:08:30.718413    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:08:30.730574    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:08:30.730590    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:08:30.746225    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:08:30.746236    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:08:30.765205    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:08:30.765225    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:08:30.778400    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:08:30.778416    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:08:30.782991    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:08:30.783002    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:08:30.798084    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:08:30.798097    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:08:30.817274    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:08:30.817288    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:08:30.829247    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:08:30.829258    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:08:30.856123    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:08:30.856137    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:08:30.894354    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:08:30.894373    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:08:30.930601    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:08:30.930612    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:08:30.943581    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:08:30.943594    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:08:30.958929    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:08:30.958945    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:33.473446    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:08:38.475452    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:08:38.476105    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:08:38.518660    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:08:38.518818    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:08:38.542202    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:08:38.542326    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:08:38.557421    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:08:38.557512    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:08:38.577889    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:08:38.577968    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:08:38.594809    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:08:38.594885    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:08:38.607480    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:08:38.607558    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:08:38.626777    8853 logs.go:282] 0 containers: []
	W1007 05:08:38.626791    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:08:38.626847    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:08:38.637451    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:08:38.637470    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:08:38.637476    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:08:38.653045    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:08:38.653058    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:08:38.665178    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:08:38.665190    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:08:38.681324    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:08:38.681337    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:08:38.717110    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:08:38.717124    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:08:38.732390    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:08:38.732401    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:08:38.748229    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:08:38.748243    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:08:38.760327    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:08:38.760336    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:08:38.777837    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:08:38.777848    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:08:38.795686    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:08:38.795696    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:08:38.832604    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:08:38.832615    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:08:38.844896    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:08:38.844907    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:38.856560    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:08:38.856570    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:08:38.881063    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:08:38.881074    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:08:38.885481    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:08:38.885487    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:08:41.400084    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:08:46.402441    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:08:46.402900    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:08:46.444177    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:08:46.444319    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:08:46.466501    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:08:46.466622    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:08:46.482934    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:08:46.483030    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:08:46.495146    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:08:46.495224    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:08:46.505707    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:08:46.505780    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:08:46.522128    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:08:46.522203    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:08:46.532560    8853 logs.go:282] 0 containers: []
	W1007 05:08:46.532572    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:08:46.532634    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:08:46.543524    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:08:46.543545    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:08:46.543550    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:08:46.555008    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:08:46.555020    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:08:46.589792    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:08:46.589799    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:08:46.602000    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:08:46.602013    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:08:46.613980    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:08:46.614017    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:08:46.630067    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:08:46.630079    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:08:46.641833    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:08:46.641843    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:08:46.658651    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:08:46.658660    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:08:46.663704    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:08:46.663711    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:08:46.676200    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:08:46.676216    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:08:46.687815    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:08:46.687824    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:08:46.724457    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:08:46.724466    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:08:46.748860    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:08:46.748868    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:08:46.773629    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:08:46.773635    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:08:46.789475    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:08:46.789487    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:49.302321    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:08:54.304829    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:08:54.305061    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:08:54.326507    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:08:54.326614    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:08:54.341358    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:08:54.341440    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:08:54.353703    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:08:54.353779    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:08:54.364584    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:08:54.364662    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:08:54.375889    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:08:54.375959    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:08:54.387114    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:08:54.387188    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:08:54.398696    8853 logs.go:282] 0 containers: []
	W1007 05:08:54.398709    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:08:54.398768    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:08:54.409076    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:08:54.409094    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:08:54.409099    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:08:54.420836    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:08:54.420845    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:08:54.435395    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:08:54.435407    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:08:54.447151    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:08:54.447163    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:08:54.459008    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:08:54.459021    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:08:54.493540    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:08:54.493547    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:08:54.497415    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:08:54.497423    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:08:54.511286    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:08:54.511295    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:08:54.525162    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:08:54.525172    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:08:54.537258    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:08:54.537274    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:08:54.549185    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:08:54.549195    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:08:54.573713    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:08:54.573722    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:08:54.591135    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:08:54.591147    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:08:54.602943    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:08:54.602954    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:08:54.638314    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:08:54.638327    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:08:57.155485    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:09:02.157773    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:09:02.158281    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:09:02.199833    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:09:02.199982    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:09:02.224626    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:09:02.224766    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:09:02.240268    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:09:02.240359    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:09:02.252958    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:09:02.253036    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:09:02.267919    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:09:02.267985    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:09:02.279200    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:09:02.279260    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:09:02.294058    8853 logs.go:282] 0 containers: []
	W1007 05:09:02.294067    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:09:02.294121    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:09:02.304259    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:09:02.304275    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:09:02.304281    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:09:02.318717    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:09:02.318730    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:09:02.330511    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:09:02.330523    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:09:02.345407    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:09:02.345418    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:09:02.356918    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:09:02.356929    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:09:02.375392    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:09:02.375405    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:09:02.379954    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:09:02.379960    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:09:02.392231    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:09:02.392243    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:09:02.427517    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:09:02.427531    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:09:02.439213    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:09:02.439222    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:09:02.463794    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:09:02.463805    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:09:02.497991    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:09:02.497998    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:09:02.509955    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:09:02.509965    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:09:02.523819    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:09:02.523829    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:09:02.547418    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:09:02.547425    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:09:05.062224    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:09:10.064341    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:09:10.064446    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:09:10.076212    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:09:10.076286    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:09:10.090131    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:09:10.090186    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:09:10.101541    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:09:10.101606    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:09:10.114129    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:09:10.114200    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:09:10.125522    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:09:10.125592    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:09:10.136713    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:09:10.136780    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:09:10.147000    8853 logs.go:282] 0 containers: []
	W1007 05:09:10.147008    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:09:10.147049    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:09:10.158138    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:09:10.158153    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:09:10.158158    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:09:10.171046    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:09:10.171054    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:09:10.185692    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:09:10.185699    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:09:10.204573    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:09:10.204588    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:09:10.219694    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:09:10.219708    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:09:10.239179    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:09:10.239199    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:09:10.254260    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:09:10.254277    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:09:10.279023    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:09:10.279046    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:09:10.284276    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:09:10.284290    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:09:10.297983    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:09:10.298001    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:09:10.311663    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:09:10.311676    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:09:10.349232    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:09:10.349246    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:09:10.361196    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:09:10.361206    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:09:10.385254    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:09:10.385266    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:09:10.397884    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:09:10.397895    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:09:12.935230    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:09:17.937933    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:09:17.938041    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:09:17.954789    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:09:17.954859    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:09:17.966054    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:09:17.966138    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:09:17.995088    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:09:17.995183    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:09:18.009934    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:09:18.010030    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:09:18.021781    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:09:18.021856    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:09:18.032479    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:09:18.032542    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:09:18.042630    8853 logs.go:282] 0 containers: []
	W1007 05:09:18.042642    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:09:18.042705    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:09:18.052938    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:09:18.052957    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:09:18.052962    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:09:18.087373    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:09:18.087381    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:09:18.091495    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:09:18.091503    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:09:18.125201    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:09:18.125215    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:09:18.136942    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:09:18.136951    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:09:18.149223    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:09:18.149236    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:09:18.164924    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:09:18.164934    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:09:18.187747    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:09:18.187754    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:09:18.202038    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:09:18.202048    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:09:18.223210    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:09:18.223225    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:09:18.235872    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:09:18.235881    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:09:18.249777    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:09:18.249786    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:09:18.261775    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:09:18.261785    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:09:18.273605    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:09:18.273615    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:09:18.290819    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:09:18.290829    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:09:20.802740    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:09:25.804990    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:09:25.805604    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:09:25.849262    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:09:25.849426    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:09:25.871233    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:09:25.871370    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:09:25.887431    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:09:25.887523    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:09:25.900050    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:09:25.900124    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:09:25.910611    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:09:25.910683    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:09:25.925794    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:09:25.925865    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:09:25.936394    8853 logs.go:282] 0 containers: []
	W1007 05:09:25.936406    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:09:25.936470    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:09:25.947380    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:09:25.947403    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:09:25.947408    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:09:25.965688    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:09:25.965700    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:09:25.978064    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:09:25.978076    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:09:25.996097    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:09:25.996107    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:09:26.009623    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:09:26.009634    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:09:26.044421    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:09:26.044434    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:09:26.056823    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:09:26.056834    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:09:26.069148    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:09:26.069160    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:09:26.080887    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:09:26.080898    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:09:26.106193    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:09:26.106203    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:09:26.123004    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:09:26.123016    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:09:26.135187    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:09:26.135201    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:09:26.171053    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:09:26.171063    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:09:26.175216    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:09:26.175223    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:09:26.190334    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:09:26.190348    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:09:28.704960    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:09:33.706985    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:09:33.707088    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:09:33.718406    8853 logs.go:282] 1 containers: [e0f3b5c1f824]
	I1007 05:09:33.718471    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:09:33.731849    8853 logs.go:282] 1 containers: [93e2eb0cde8d]
	I1007 05:09:33.731917    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:09:33.743664    8853 logs.go:282] 4 containers: [44dd7e315298 06408f87393d 1cec2ba8f1ac 001d84d5dc7f]
	I1007 05:09:33.743746    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:09:33.755346    8853 logs.go:282] 1 containers: [f9668a0390fc]
	I1007 05:09:33.755401    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:09:33.766763    8853 logs.go:282] 1 containers: [242d7f4381ff]
	I1007 05:09:33.766828    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:09:33.777995    8853 logs.go:282] 1 containers: [8d90b2575520]
	I1007 05:09:33.778059    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:09:33.790278    8853 logs.go:282] 0 containers: []
	W1007 05:09:33.790295    8853 logs.go:284] No container was found matching "kindnet"
	I1007 05:09:33.790375    8853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:09:33.801810    8853 logs.go:282] 1 containers: [b3100f76e01a]
	I1007 05:09:33.801825    8853 logs.go:123] Gathering logs for kubelet ...
	I1007 05:09:33.801830    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:09:33.838845    8853 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:09:33.838861    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:09:33.878308    8853 logs.go:123] Gathering logs for kube-apiserver [e0f3b5c1f824] ...
	I1007 05:09:33.878320    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f3b5c1f824"
	I1007 05:09:33.894789    8853 logs.go:123] Gathering logs for kube-scheduler [f9668a0390fc] ...
	I1007 05:09:33.894798    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9668a0390fc"
	I1007 05:09:33.912088    8853 logs.go:123] Gathering logs for coredns [1cec2ba8f1ac] ...
	I1007 05:09:33.912104    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cec2ba8f1ac"
	I1007 05:09:33.925310    8853 logs.go:123] Gathering logs for kube-controller-manager [8d90b2575520] ...
	I1007 05:09:33.925320    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d90b2575520"
	I1007 05:09:33.949317    8853 logs.go:123] Gathering logs for dmesg ...
	I1007 05:09:33.949325    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:09:33.953734    8853 logs.go:123] Gathering logs for etcd [93e2eb0cde8d] ...
	I1007 05:09:33.953744    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e2eb0cde8d"
	I1007 05:09:33.970306    8853 logs.go:123] Gathering logs for coredns [44dd7e315298] ...
	I1007 05:09:33.970317    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44dd7e315298"
	I1007 05:09:33.984179    8853 logs.go:123] Gathering logs for storage-provisioner [b3100f76e01a] ...
	I1007 05:09:33.984196    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3100f76e01a"
	I1007 05:09:34.000999    8853 logs.go:123] Gathering logs for Docker ...
	I1007 05:09:34.001011    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:09:34.025501    8853 logs.go:123] Gathering logs for container status ...
	I1007 05:09:34.025518    8853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:09:34.038458    8853 logs.go:123] Gathering logs for coredns [06408f87393d] ...
	I1007 05:09:34.038468    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06408f87393d"
	I1007 05:09:34.050535    8853 logs.go:123] Gathering logs for coredns [001d84d5dc7f] ...
	I1007 05:09:34.050546    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 001d84d5dc7f"
	I1007 05:09:34.063671    8853 logs.go:123] Gathering logs for kube-proxy [242d7f4381ff] ...
	I1007 05:09:34.063683    8853 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 242d7f4381ff"
	I1007 05:09:36.577917    8853 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:09:41.580840    8853 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:09:41.589247    8853 out.go:201] 
	W1007 05:09:41.594131    8853 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1007 05:09:41.594170    8853 out.go:270] * 
	W1007 05:09:41.596634    8853 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:09:41.608985    8853 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-013000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (564.55s)
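
Note: the failure above is entirely the apiserver health probe: every check of https://10.0.2.15:8443/healthz hit the client timeout until the 6m0s node wait expired, so minikube exited with GUEST_START. A minimal manual re-check of the same probe, assuming the "stopped-upgrade-013000" guest were still running (profile name and URL are copied from the log above; the commands are illustrative and not part of the original output):

	# re-run the probe that api_server.go kept timing out on
	minikube ssh -p stopped-upgrade-013000 "curl -k --max-time 5 https://10.0.2.15:8443/healthz"
	# a healthy apiserver answers "ok"; in this run every attempt timed out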

                                                
                                    
TestPause/serial/Start (9.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-908000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-908000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.816480834s)

                                                
                                                
-- stdout --
	* [pause-908000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-908000" primary control-plane node in "pause-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-908000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-908000 -n pause-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-908000 -n pause-908000: exit status 7 (66.815667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)
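
Note: this failure, and every NoKubernetes and NetworkPlugins failure below, shares one root cause: the qemu2 driver could not reach the socket_vmnet daemon, so VM creation died with 'Failed to connect to "/var/run/socket_vmnet": Connection refused'. A quick host-side check, assuming socket_vmnet was installed at the default path these runs use (illustrative commands, not part of the original output):

	# the path must exist and be a UNIX socket ("s" in the mode bits)
	ls -l /var/run/socket_vmnet
	# list open UNIX domain sockets; no socket_vmnet line means no daemon
	# is listening, which is exactly what "Connection refused" indicates
	sudo lsof -U | grep socket_vmnet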

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-602000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-602000 --driver=qemu2 : exit status 80 (9.838627125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-602000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-602000" primary control-plane node in "NoKubernetes-602000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-602000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-602000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-602000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000: exit status 7 (36.396625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --driver=qemu2 : exit status 80 (5.789536625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-602000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-602000
	* Restarting existing qemu2 VM for "NoKubernetes-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000: exit status 7 (57.8685ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.85s)

                                                
                                    
TestNoKubernetes/serial/Start (5.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --driver=qemu2 : exit status 80 (5.7814925s)

                                                
                                                
-- stdout --
	* [NoKubernetes-602000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-602000
	* Restarting existing qemu2 VM for "NoKubernetes-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000: exit status 7 (70.231292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.85s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-602000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-602000 --driver=qemu2 : exit status 80 (5.781970375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-602000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-602000
	* Restarting existing qemu2 VM for "NoKubernetes-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-602000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-602000 -n NoKubernetes-602000: exit status 7 (34.172375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.82s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.822719083s)

                                                
                                                
-- stdout --
	* [auto-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-842000" primary control-plane node in "auto-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:08:20.872642    9046 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:08:20.872784    9046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:20.872792    9046 out.go:358] Setting ErrFile to fd 2...
	I1007 05:08:20.872795    9046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:20.872941    9046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:08:20.874110    9046 out.go:352] Setting JSON to false
	I1007 05:08:20.892281    9046 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5871,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:08:20.892365    9046 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:08:20.898023    9046 out.go:177] * [auto-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:08:20.905957    9046 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:08:20.906021    9046 notify.go:220] Checking for updates...
	I1007 05:08:20.912935    9046 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:08:20.915969    9046 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:08:20.918912    9046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:08:20.921969    9046 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:08:20.924874    9046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:08:20.928330    9046 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:08:20.928404    9046 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:08:20.928449    9046 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:08:20.932935    9046 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:08:20.939966    9046 start.go:297] selected driver: qemu2
	I1007 05:08:20.939972    9046 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:08:20.939979    9046 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:08:20.942535    9046 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:08:20.945913    9046 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:08:20.949021    9046 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:08:20.949047    9046 cni.go:84] Creating CNI manager for ""
	I1007 05:08:20.949073    9046 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:08:20.949082    9046 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:08:20.949120    9046 start.go:340] cluster config:
	{Name:auto-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:08:20.954009    9046 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:08:20.961710    9046 out.go:177] * Starting "auto-842000" primary control-plane node in "auto-842000" cluster
	I1007 05:08:20.965868    9046 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:08:20.965881    9046 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:08:20.965896    9046 cache.go:56] Caching tarball of preloaded images
	I1007 05:08:20.965969    9046 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:08:20.965975    9046 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:08:20.966034    9046 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/auto-842000/config.json ...
	I1007 05:08:20.966045    9046 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/auto-842000/config.json: {Name:mka2eb5b3fb6342b1a9a25ad34c4843b4b5c1957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:08:20.966379    9046 start.go:360] acquireMachinesLock for auto-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:08:20.966429    9046 start.go:364] duration metric: took 43.959µs to acquireMachinesLock for "auto-842000"
	I1007 05:08:20.966442    9046 start.go:93] Provisioning new machine with config: &{Name:auto-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:08:20.966468    9046 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:08:20.970792    9046 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:08:20.987377    9046 start.go:159] libmachine.API.Create for "auto-842000" (driver="qemu2")
	I1007 05:08:20.987413    9046 client.go:168] LocalClient.Create starting
	I1007 05:08:20.987489    9046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:08:20.987543    9046 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:20.987557    9046 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:20.987601    9046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:08:20.987632    9046 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:20.987641    9046 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:20.988084    9046 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:08:21.134832    9046 main.go:141] libmachine: Creating SSH key...
	I1007 05:08:21.278356    9046 main.go:141] libmachine: Creating Disk image...
	I1007 05:08:21.278363    9046 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:08:21.278545    9046 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2
	I1007 05:08:21.288376    9046 main.go:141] libmachine: STDOUT: 
	I1007 05:08:21.288400    9046 main.go:141] libmachine: STDERR: 
	I1007 05:08:21.288457    9046 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2 +20000M
	I1007 05:08:21.297429    9046 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:08:21.297444    9046 main.go:141] libmachine: STDERR: 
	I1007 05:08:21.297456    9046 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2
	I1007 05:08:21.297462    9046 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:08:21.297475    9046 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:08:21.297501    9046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:17:13:91:9f:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2
	I1007 05:08:21.299419    9046 main.go:141] libmachine: STDOUT: 
	I1007 05:08:21.299432    9046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:08:21.299453    9046 client.go:171] duration metric: took 312.033917ms to LocalClient.Create
	I1007 05:08:23.301566    9046 start.go:128] duration metric: took 2.335096959s to createHost
	I1007 05:08:23.301618    9046 start.go:83] releasing machines lock for "auto-842000", held for 2.335190959s
	W1007 05:08:23.301635    9046 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:23.305774    9046 out.go:177] * Deleting "auto-842000" in qemu2 ...
	W1007 05:08:23.320642    9046 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:23.320651    9046 start.go:729] Will try again in 5 seconds ...
	I1007 05:08:28.322912    9046 start.go:360] acquireMachinesLock for auto-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:08:28.323459    9046 start.go:364] duration metric: took 423.75µs to acquireMachinesLock for "auto-842000"
	I1007 05:08:28.323532    9046 start.go:93] Provisioning new machine with config: &{Name:auto-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:08:28.323757    9046 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:08:28.332365    9046 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:08:28.378033    9046 start.go:159] libmachine.API.Create for "auto-842000" (driver="qemu2")
	I1007 05:08:28.378095    9046 client.go:168] LocalClient.Create starting
	I1007 05:08:28.378268    9046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:08:28.378348    9046 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:28.378365    9046 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:28.378445    9046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:08:28.378506    9046 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:28.378518    9046 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:28.379161    9046 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:08:28.531986    9046 main.go:141] libmachine: Creating SSH key...
	I1007 05:08:28.610585    9046 main.go:141] libmachine: Creating Disk image...
	I1007 05:08:28.610592    9046 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:08:28.610778    9046 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2
	I1007 05:08:28.620711    9046 main.go:141] libmachine: STDOUT: 
	I1007 05:08:28.620731    9046 main.go:141] libmachine: STDERR: 
	I1007 05:08:28.620790    9046 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2 +20000M
	I1007 05:08:28.629204    9046 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:08:28.629219    9046 main.go:141] libmachine: STDERR: 
	I1007 05:08:28.629236    9046 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2
	I1007 05:08:28.629242    9046 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:08:28.629253    9046 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:08:28.629289    9046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:3e:d6:d7:fe:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/auto-842000/disk.qcow2
	I1007 05:08:28.631087    9046 main.go:141] libmachine: STDOUT: 
	I1007 05:08:28.631102    9046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:08:28.631115    9046 client.go:171] duration metric: took 253.014125ms to LocalClient.Create
	I1007 05:08:30.632013    9046 start.go:128] duration metric: took 2.308251667s to createHost
	I1007 05:08:30.632037    9046 start.go:83] releasing machines lock for "auto-842000", held for 2.308563708s
	W1007 05:08:30.632124    9046 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:30.639769    9046 out.go:201] 
	W1007 05:08:30.644342    9046 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:08:30.644351    9046 out.go:270] * 
	* 
	W1007 05:08:30.644785    9046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:08:30.656330    9046 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.82s)
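
Note: every start in this group dies at the same point. The qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and that wrapper exits immediately with Failed to connect to "/var/run/socket_vmnet": Connection refused, so the failure sits in the host's socket_vmnet helper, not in the "auto" CNI profile under test. A minimal host-side triage, assuming the default socket path shown in the log (illustrative commands; these were not part of the recorded run):

	# is the socket_vmnet daemon alive, and does its unix socket exist?
	ps aux | grep '[s]ocket_vmnet'
	ls -l /var/run/socket_vmnet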

TestNetworkPlugins/group/kindnet/Start (10.04s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.041181958s)

-- stdout --
	* [kindnet-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-842000" primary control-plane node in "kindnet-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:08:33.039655    9155 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:08:33.039793    9155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:33.039796    9155 out.go:358] Setting ErrFile to fd 2...
	I1007 05:08:33.039802    9155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:33.039934    9155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:08:33.041237    9155 out.go:352] Setting JSON to false
	I1007 05:08:33.059409    9155 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5884,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:08:33.059492    9155 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:08:33.064223    9155 out.go:177] * [kindnet-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:08:33.075219    9155 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:08:33.075254    9155 notify.go:220] Checking for updates...
	I1007 05:08:33.082216    9155 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:08:33.090161    9155 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:08:33.096135    9155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:08:33.103102    9155 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:08:33.107216    9155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:08:33.111526    9155 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:08:33.111611    9155 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:08:33.111674    9155 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:08:33.115195    9155 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:08:33.122118    9155 start.go:297] selected driver: qemu2
	I1007 05:08:33.122123    9155 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:08:33.122129    9155 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:08:33.124800    9155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:08:33.128108    9155 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:08:33.131321    9155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:08:33.131352    9155 cni.go:84] Creating CNI manager for "kindnet"
	I1007 05:08:33.131359    9155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 05:08:33.131387    9155 start.go:340] cluster config:
	{Name:kindnet-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:08:33.136344    9155 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:08:33.144195    9155 out.go:177] * Starting "kindnet-842000" primary control-plane node in "kindnet-842000" cluster
	I1007 05:08:33.147119    9155 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:08:33.147134    9155 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:08:33.147150    9155 cache.go:56] Caching tarball of preloaded images
	I1007 05:08:33.147242    9155 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:08:33.147249    9155 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:08:33.147310    9155 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/kindnet-842000/config.json ...
	I1007 05:08:33.147322    9155 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/kindnet-842000/config.json: {Name:mkeae57617978d7fa67257e2c812e4faa9c2e15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:08:33.147614    9155 start.go:360] acquireMachinesLock for kindnet-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:08:33.147670    9155 start.go:364] duration metric: took 49.125µs to acquireMachinesLock for "kindnet-842000"
	I1007 05:08:33.147684    9155 start.go:93] Provisioning new machine with config: &{Name:kindnet-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:08:33.147717    9155 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:08:33.151245    9155 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:08:33.167886    9155 start.go:159] libmachine.API.Create for "kindnet-842000" (driver="qemu2")
	I1007 05:08:33.167922    9155 client.go:168] LocalClient.Create starting
	I1007 05:08:33.167985    9155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:08:33.168020    9155 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:33.168031    9155 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:33.168073    9155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:08:33.168101    9155 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:33.168111    9155 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:33.168457    9155 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:08:33.309247    9155 main.go:141] libmachine: Creating SSH key...
	I1007 05:08:33.566664    9155 main.go:141] libmachine: Creating Disk image...
	I1007 05:08:33.566672    9155 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:08:33.566883    9155 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2
	I1007 05:08:33.577484    9155 main.go:141] libmachine: STDOUT: 
	I1007 05:08:33.577512    9155 main.go:141] libmachine: STDERR: 
	I1007 05:08:33.577571    9155 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2 +20000M
	I1007 05:08:33.586258    9155 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:08:33.586274    9155 main.go:141] libmachine: STDERR: 
	I1007 05:08:33.586300    9155 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2
	I1007 05:08:33.586304    9155 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:08:33.586314    9155 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:08:33.586351    9155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b6:e4:9b:fa:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2
	I1007 05:08:33.588270    9155 main.go:141] libmachine: STDOUT: 
	I1007 05:08:33.588284    9155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:08:33.588306    9155 client.go:171] duration metric: took 420.378042ms to LocalClient.Create
	I1007 05:08:35.590539    9155 start.go:128] duration metric: took 2.4427975s to createHost
	I1007 05:08:35.590680    9155 start.go:83] releasing machines lock for "kindnet-842000", held for 2.443006959s
	W1007 05:08:35.590732    9155 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:35.604007    9155 out.go:177] * Deleting "kindnet-842000" in qemu2 ...
	W1007 05:08:35.625519    9155 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:35.625570    9155 start.go:729] Will try again in 5 seconds ...
	I1007 05:08:40.626356    9155 start.go:360] acquireMachinesLock for kindnet-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:08:40.626811    9155 start.go:364] duration metric: took 384.875µs to acquireMachinesLock for "kindnet-842000"
	I1007 05:08:40.626888    9155 start.go:93] Provisioning new machine with config: &{Name:kindnet-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:08:40.627114    9155 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:08:40.635630    9155 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:08:40.680608    9155 start.go:159] libmachine.API.Create for "kindnet-842000" (driver="qemu2")
	I1007 05:08:40.680667    9155 client.go:168] LocalClient.Create starting
	I1007 05:08:40.680821    9155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:08:40.680933    9155 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:40.680952    9155 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:40.681029    9155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:08:40.681089    9155 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:40.681109    9155 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:40.681796    9155 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:08:40.834996    9155 main.go:141] libmachine: Creating SSH key...
	I1007 05:08:40.986426    9155 main.go:141] libmachine: Creating Disk image...
	I1007 05:08:40.986439    9155 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:08:40.986644    9155 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2
	I1007 05:08:40.996673    9155 main.go:141] libmachine: STDOUT: 
	I1007 05:08:40.996694    9155 main.go:141] libmachine: STDERR: 
	I1007 05:08:40.996753    9155 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2 +20000M
	I1007 05:08:41.005372    9155 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:08:41.005387    9155 main.go:141] libmachine: STDERR: 
	I1007 05:08:41.005401    9155 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2
	I1007 05:08:41.005406    9155 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:08:41.005416    9155 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:08:41.005455    9155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:73:b5:eb:cc:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kindnet-842000/disk.qcow2
	I1007 05:08:41.007375    9155 main.go:141] libmachine: STDOUT: 
	I1007 05:08:41.007389    9155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:08:41.007403    9155 client.go:171] duration metric: took 326.730084ms to LocalClient.Create
	I1007 05:08:43.009618    9155 start.go:128] duration metric: took 2.382460709s to createHost
	I1007 05:08:43.009859    9155 start.go:83] releasing machines lock for "kindnet-842000", held for 2.382907833s
	W1007 05:08:43.010395    9155 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:43.019998    9155 out.go:201] 
	W1007 05:08:43.023971    9155 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:08:43.024003    9155 out.go:270] * 
	* 
	W1007 05:08:43.026342    9155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:08:43.034980    9155 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.04s)
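
Note: the kindnet run fails identically to the auto run, which again points at the host rather than the CNI under test. If the socket check above comes up empty, restarting the helper normally recreates /var/run/socket_vmnet. A sketch, assuming socket_vmnet was installed as a Homebrew service per the minikube qemu2 driver docs (the service name and brew prefix are assumptions; a make-install setup would manage the daemon through launchd instead):

	# restart the daemon that owns /var/run/socket_vmnet (assumed Homebrew service)
	HOMEBREW=$(brew --prefix)
	sudo "${HOMEBREW}/bin/brew" services restart socket_vmnet
	# confirm the socket is back before re-running the suite
	ls -l /var/run/socket_vmnet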

TestNetworkPlugins/group/calico/Start (9.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.786873708s)

-- stdout --
	* [calico-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-842000" primary control-plane node in "calico-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:08:45.508680    9268 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:08:45.508830    9268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:45.508834    9268 out.go:358] Setting ErrFile to fd 2...
	I1007 05:08:45.508836    9268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:45.508964    9268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:08:45.510106    9268 out.go:352] Setting JSON to false
	I1007 05:08:45.528689    9268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5896,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:08:45.528774    9268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:08:45.534029    9268 out.go:177] * [calico-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:08:45.541962    9268 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:08:45.542012    9268 notify.go:220] Checking for updates...
	I1007 05:08:45.549006    9268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:08:45.551931    9268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:08:45.554923    9268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:08:45.558006    9268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:08:45.560959    9268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:08:45.564387    9268 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:08:45.564468    9268 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:08:45.564533    9268 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:08:45.568939    9268 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:08:45.575933    9268 start.go:297] selected driver: qemu2
	I1007 05:08:45.575939    9268 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:08:45.575947    9268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:08:45.578619    9268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:08:45.582007    9268 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:08:45.583343    9268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:08:45.583373    9268 cni.go:84] Creating CNI manager for "calico"
	I1007 05:08:45.583381    9268 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1007 05:08:45.583420    9268 start.go:340] cluster config:
	{Name:calico-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:08:45.588223    9268 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:08:45.596001    9268 out.go:177] * Starting "calico-842000" primary control-plane node in "calico-842000" cluster
	I1007 05:08:45.599862    9268 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:08:45.599880    9268 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:08:45.599887    9268 cache.go:56] Caching tarball of preloaded images
	I1007 05:08:45.599962    9268 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:08:45.599968    9268 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:08:45.600033    9268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/calico-842000/config.json ...
	I1007 05:08:45.600044    9268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/calico-842000/config.json: {Name:mk3568f3644689e64b25c2e4101ec0ffad7957ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:08:45.600317    9268 start.go:360] acquireMachinesLock for calico-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:08:45.600365    9268 start.go:364] duration metric: took 42µs to acquireMachinesLock for "calico-842000"
	I1007 05:08:45.600377    9268 start.go:93] Provisioning new machine with config: &{Name:calico-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:08:45.600413    9268 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:08:45.604970    9268 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:08:45.620330    9268 start.go:159] libmachine.API.Create for "calico-842000" (driver="qemu2")
	I1007 05:08:45.620367    9268 client.go:168] LocalClient.Create starting
	I1007 05:08:45.620441    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:08:45.620481    9268 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:45.620495    9268 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:45.620535    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:08:45.620566    9268 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:45.620573    9268 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:45.621037    9268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:08:45.763178    9268 main.go:141] libmachine: Creating SSH key...
	I1007 05:08:45.838543    9268 main.go:141] libmachine: Creating Disk image...
	I1007 05:08:45.838550    9268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:08:45.838719    9268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2
	I1007 05:08:45.848789    9268 main.go:141] libmachine: STDOUT: 
	I1007 05:08:45.848807    9268 main.go:141] libmachine: STDERR: 
	I1007 05:08:45.848859    9268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2 +20000M
	I1007 05:08:45.857652    9268 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:08:45.857672    9268 main.go:141] libmachine: STDERR: 
	I1007 05:08:45.857692    9268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2
	I1007 05:08:45.857698    9268 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:08:45.857712    9268 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:08:45.857738    9268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9e:ac:ef:7d:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2
	I1007 05:08:45.859548    9268 main.go:141] libmachine: STDOUT: 
	I1007 05:08:45.859564    9268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:08:45.859585    9268 client.go:171] duration metric: took 239.212875ms to LocalClient.Create
	I1007 05:08:47.861843    9268 start.go:128] duration metric: took 2.261399625s to createHost
	I1007 05:08:47.861950    9268 start.go:83] releasing machines lock for "calico-842000", held for 2.261582625s
	W1007 05:08:47.862012    9268 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:47.873234    9268 out.go:177] * Deleting "calico-842000" in qemu2 ...
	W1007 05:08:47.898886    9268 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:47.898916    9268 start.go:729] Will try again in 5 seconds ...
	I1007 05:08:52.901027    9268 start.go:360] acquireMachinesLock for calico-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:08:52.901445    9268 start.go:364] duration metric: took 372.25µs to acquireMachinesLock for "calico-842000"
	I1007 05:08:52.901523    9268 start.go:93] Provisioning new machine with config: &{Name:calico-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:08:52.901650    9268 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:08:52.911986    9268 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:08:52.952596    9268 start.go:159] libmachine.API.Create for "calico-842000" (driver="qemu2")
	I1007 05:08:52.952657    9268 client.go:168] LocalClient.Create starting
	I1007 05:08:52.952781    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:08:52.952849    9268 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:52.952864    9268 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:52.952934    9268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:08:52.952983    9268 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:52.952997    9268 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:52.953617    9268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:08:53.103024    9268 main.go:141] libmachine: Creating SSH key...
	I1007 05:08:53.197407    9268 main.go:141] libmachine: Creating Disk image...
	I1007 05:08:53.197414    9268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:08:53.197618    9268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2
	I1007 05:08:53.207883    9268 main.go:141] libmachine: STDOUT: 
	I1007 05:08:53.207904    9268 main.go:141] libmachine: STDERR: 
	I1007 05:08:53.207953    9268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2 +20000M
	I1007 05:08:53.216496    9268 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:08:53.216513    9268 main.go:141] libmachine: STDERR: 
	I1007 05:08:53.216527    9268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2
	I1007 05:08:53.216533    9268 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:08:53.216542    9268 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:08:53.216599    9268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:68:14:0f:81:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/calico-842000/disk.qcow2
	I1007 05:08:53.218460    9268 main.go:141] libmachine: STDOUT: 
	I1007 05:08:53.218476    9268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:08:53.218494    9268 client.go:171] duration metric: took 265.833375ms to LocalClient.Create
	I1007 05:08:55.220697    9268 start.go:128] duration metric: took 2.319017875s to createHost
	I1007 05:08:55.220797    9268 start.go:83] releasing machines lock for "calico-842000", held for 2.319340833s
	W1007 05:08:55.221250    9268 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:08:55.233894    9268 out.go:201] 
	W1007 05:08:55.237939    9268 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:08:55.238015    9268 out.go:270] * 
	* 
	W1007 05:08:55.241091    9268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:08:55.249922    9268 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.79s)
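
Every failure in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet helper on /var/run/socket_vmnet, so QEMU never receives a network file descriptor and createHost aborts. A minimal Go sketch of that reachability check, independent of minikube (the socket path is taken from the logs above; the probe itself is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client needs. A healthy
		// helper accepts immediately; a stopped one yields the same
		// "Connection refused" seen throughout the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe reports "connection refused", the helper is simply not running on the build host; assuming a Homebrew install, it is typically brought back with "sudo brew services start socket_vmnet" before rerunning the suite.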

TestNetworkPlugins/group/custom-flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.731501416s)

-- stdout --
	* [custom-flannel-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-842000" primary control-plane node in "custom-flannel-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:08:57.828144    9388 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:08:57.828304    9388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:57.828307    9388 out.go:358] Setting ErrFile to fd 2...
	I1007 05:08:57.828310    9388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:08:57.828436    9388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:08:57.829604    9388 out.go:352] Setting JSON to false
	I1007 05:08:57.848265    9388 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5908,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:08:57.848333    9388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:08:57.853480    9388 out.go:177] * [custom-flannel-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:08:57.861353    9388 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:08:57.861415    9388 notify.go:220] Checking for updates...
	I1007 05:08:57.868470    9388 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:08:57.870024    9388 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:08:57.873420    9388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:08:57.876508    9388 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:08:57.879494    9388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:08:57.882884    9388 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:08:57.882957    9388 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:08:57.883000    9388 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:08:57.887433    9388 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:08:57.894394    9388 start.go:297] selected driver: qemu2
	I1007 05:08:57.894400    9388 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:08:57.894407    9388 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:08:57.896809    9388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:08:57.900413    9388 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:08:57.903494    9388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:08:57.903514    9388 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1007 05:08:57.903522    9388 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1007 05:08:57.903549    9388 start.go:340] cluster config:
	{Name:custom-flannel-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:08:57.908215    9388 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:08:57.916453    9388 out.go:177] * Starting "custom-flannel-842000" primary control-plane node in "custom-flannel-842000" cluster
	I1007 05:08:57.920425    9388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:08:57.920442    9388 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:08:57.920450    9388 cache.go:56] Caching tarball of preloaded images
	I1007 05:08:57.920559    9388 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:08:57.920565    9388 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:08:57.920639    9388 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/custom-flannel-842000/config.json ...
	I1007 05:08:57.920650    9388 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/custom-flannel-842000/config.json: {Name:mk866698e8f7645b238a78bb3ca6e1e52b919c03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:08:57.920896    9388 start.go:360] acquireMachinesLock for custom-flannel-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:08:57.920946    9388 start.go:364] duration metric: took 42.292µs to acquireMachinesLock for "custom-flannel-842000"
	I1007 05:08:57.920959    9388 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:08:57.920993    9388 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:08:57.929389    9388 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:08:57.946525    9388 start.go:159] libmachine.API.Create for "custom-flannel-842000" (driver="qemu2")
	I1007 05:08:57.946554    9388 client.go:168] LocalClient.Create starting
	I1007 05:08:57.946621    9388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:08:57.946656    9388 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:57.946667    9388 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:57.946710    9388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:08:57.946743    9388 main.go:141] libmachine: Decoding PEM data...
	I1007 05:08:57.946750    9388 main.go:141] libmachine: Parsing certificate...
	I1007 05:08:57.947191    9388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:08:58.088923    9388 main.go:141] libmachine: Creating SSH key...
	I1007 05:08:58.165694    9388 main.go:141] libmachine: Creating Disk image...
	I1007 05:08:58.165700    9388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:08:58.165878    9388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2
	I1007 05:08:58.175597    9388 main.go:141] libmachine: STDOUT: 
	I1007 05:08:58.175616    9388 main.go:141] libmachine: STDERR: 
	I1007 05:08:58.175670    9388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2 +20000M
	I1007 05:08:58.184145    9388 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:08:58.184166    9388 main.go:141] libmachine: STDERR: 
	I1007 05:08:58.184185    9388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2
	I1007 05:08:58.184191    9388 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:08:58.184202    9388 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:08:58.184229    9388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:af:3a:fa:af:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2
	I1007 05:08:58.186070    9388 main.go:141] libmachine: STDOUT: 
	I1007 05:08:58.186083    9388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:08:58.186105    9388 client.go:171] duration metric: took 239.545708ms to LocalClient.Create
	I1007 05:09:00.186886    9388 start.go:128] duration metric: took 2.265835417s to createHost
	I1007 05:09:00.186980    9388 start.go:83] releasing machines lock for "custom-flannel-842000", held for 2.266031833s
	W1007 05:09:00.187051    9388 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:00.200040    9388 out.go:177] * Deleting "custom-flannel-842000" in qemu2 ...
	W1007 05:09:00.220379    9388 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:00.220422    9388 start.go:729] Will try again in 5 seconds ...
	I1007 05:09:05.222719    9388 start.go:360] acquireMachinesLock for custom-flannel-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:05.223288    9388 start.go:364] duration metric: took 446.5µs to acquireMachinesLock for "custom-flannel-842000"
	I1007 05:09:05.223362    9388 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:05.223674    9388 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:05.231335    9388 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:05.279800    9388 start.go:159] libmachine.API.Create for "custom-flannel-842000" (driver="qemu2")
	I1007 05:09:05.279855    9388 client.go:168] LocalClient.Create starting
	I1007 05:09:05.279986    9388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:05.280078    9388 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:05.280097    9388 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:05.280152    9388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:05.280215    9388 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:05.280229    9388 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:05.280805    9388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:05.432660    9388 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:05.467799    9388 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:05.467804    9388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:05.467981    9388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2
	I1007 05:09:05.477941    9388 main.go:141] libmachine: STDOUT: 
	I1007 05:09:05.477959    9388 main.go:141] libmachine: STDERR: 
	I1007 05:09:05.478021    9388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2 +20000M
	I1007 05:09:05.486653    9388 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:05.486678    9388 main.go:141] libmachine: STDERR: 
	I1007 05:09:05.486694    9388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2
	I1007 05:09:05.486699    9388 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:05.486709    9388 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:05.486742    9388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:9d:ec:59:10:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/custom-flannel-842000/disk.qcow2
	I1007 05:09:05.488583    9388 main.go:141] libmachine: STDOUT: 
	I1007 05:09:05.488602    9388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:05.488631    9388 client.go:171] duration metric: took 208.770917ms to LocalClient.Create
	I1007 05:09:07.490840    9388 start.go:128] duration metric: took 2.267134083s to createHost
	I1007 05:09:07.490939    9388 start.go:83] releasing machines lock for "custom-flannel-842000", held for 2.267632042s
	W1007 05:09:07.491399    9388 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:07.499152    9388 out.go:201] 
	W1007 05:09:07.503301    9388 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:09:07.503342    9388 out.go:270] * 
	* 
	W1007 05:09:07.505891    9388 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:09:07.514108    9388 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)
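
The retry shape is identical in every one of these runs: the first createHost fails, the half-created profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), retries once, and exits with GUEST_PROVISION when the second attempt hits the same refused socket. A rough Go sketch of that single-retry pattern as it appears in the logs (createHost here is a hypothetical stand-in for the real function in start.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for minikube's start.go createHost; here it
	// always fails the way these runs do while socket_vmnet is down.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

Because the helper stays down for the whole run, the retry can never succeed, which is why each test in this group fails in a uniform ~10 seconds.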

TestNetworkPlugins/group/false/Start (9.89s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.890005208s)

-- stdout --
	* [false-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-842000" primary control-plane node in "false-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:09:10.089573    9508 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:09:10.089768    9508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:10.089771    9508 out.go:358] Setting ErrFile to fd 2...
	I1007 05:09:10.089774    9508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:10.089937    9508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:09:10.091752    9508 out.go:352] Setting JSON to false
	I1007 05:09:10.112303    9508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5921,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:09:10.112412    9508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:09:10.117948    9508 out.go:177] * [false-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:09:10.125983    9508 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:09:10.125995    9508 notify.go:220] Checking for updates...
	I1007 05:09:10.131915    9508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:09:10.134945    9508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:09:10.135981    9508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:09:10.139039    9508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:09:10.141951    9508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:09:10.145340    9508 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:09:10.145413    9508 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:09:10.145469    9508 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:09:10.148877    9508 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:09:10.155927    9508 start.go:297] selected driver: qemu2
	I1007 05:09:10.155935    9508 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:09:10.155943    9508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:09:10.158640    9508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:09:10.161872    9508 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:09:10.165051    9508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:09:10.165072    9508 cni.go:84] Creating CNI manager for "false"
	I1007 05:09:10.165105    9508 start.go:340] cluster config:
	{Name:false-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:09:10.170195    9508 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:09:10.177901    9508 out.go:177] * Starting "false-842000" primary control-plane node in "false-842000" cluster
	I1007 05:09:10.181911    9508 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:09:10.181955    9508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:09:10.181964    9508 cache.go:56] Caching tarball of preloaded images
	I1007 05:09:10.182075    9508 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:09:10.182082    9508 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:09:10.182150    9508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/false-842000/config.json ...
	I1007 05:09:10.182165    9508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/false-842000/config.json: {Name:mk260a6dc10e33346f587a256fbaae3f52e8251c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:09:10.182471    9508 start.go:360] acquireMachinesLock for false-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:10.182531    9508 start.go:364] duration metric: took 52.375µs to acquireMachinesLock for "false-842000"
	I1007 05:09:10.182545    9508 start.go:93] Provisioning new machine with config: &{Name:false-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:10.182570    9508 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:10.185915    9508 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:10.201902    9508 start.go:159] libmachine.API.Create for "false-842000" (driver="qemu2")
	I1007 05:09:10.201939    9508 client.go:168] LocalClient.Create starting
	I1007 05:09:10.202035    9508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:10.202080    9508 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:10.202089    9508 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:10.202132    9508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:10.202164    9508 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:10.202170    9508 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:10.202563    9508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:10.347872    9508 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:10.383948    9508 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:10.383959    9508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:10.384164    9508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2
	I1007 05:09:10.395061    9508 main.go:141] libmachine: STDOUT: 
	I1007 05:09:10.395096    9508 main.go:141] libmachine: STDERR: 
	I1007 05:09:10.395168    9508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2 +20000M
	I1007 05:09:10.405120    9508 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:10.405142    9508 main.go:141] libmachine: STDERR: 
	I1007 05:09:10.405169    9508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2
	I1007 05:09:10.405175    9508 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:10.405187    9508 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:10.405211    9508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:b3:b4:f6:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2
	I1007 05:09:10.407410    9508 main.go:141] libmachine: STDOUT: 
	I1007 05:09:10.407462    9508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:10.407482    9508 client.go:171] duration metric: took 205.537916ms to LocalClient.Create
	I1007 05:09:12.409605    9508 start.go:128] duration metric: took 2.227028166s to createHost
	I1007 05:09:12.409656    9508 start.go:83] releasing machines lock for "false-842000", held for 2.227127292s
	W1007 05:09:12.409682    9508 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:12.418357    9508 out.go:177] * Deleting "false-842000" in qemu2 ...
	W1007 05:09:12.434054    9508 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:12.434065    9508 start.go:729] Will try again in 5 seconds ...
	I1007 05:09:17.436277    9508 start.go:360] acquireMachinesLock for false-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:17.436919    9508 start.go:364] duration metric: took 519.167µs to acquireMachinesLock for "false-842000"
	I1007 05:09:17.437061    9508 start.go:93] Provisioning new machine with config: &{Name:false-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:17.437311    9508 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:17.442998    9508 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:17.493859    9508 start.go:159] libmachine.API.Create for "false-842000" (driver="qemu2")
	I1007 05:09:17.493914    9508 client.go:168] LocalClient.Create starting
	I1007 05:09:17.494064    9508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:17.494144    9508 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:17.494163    9508 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:17.494222    9508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:17.494279    9508 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:17.494292    9508 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:17.494853    9508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:17.647380    9508 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:17.877997    9508 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:17.878011    9508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:17.878251    9508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2
	I1007 05:09:17.889038    9508 main.go:141] libmachine: STDOUT: 
	I1007 05:09:17.889066    9508 main.go:141] libmachine: STDERR: 
	I1007 05:09:17.889131    9508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2 +20000M
	I1007 05:09:17.897949    9508 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:17.897968    9508 main.go:141] libmachine: STDERR: 
	I1007 05:09:17.897989    9508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2
	I1007 05:09:17.897997    9508 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:17.898006    9508 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:17.898037    9508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:75:82:d5:e2:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/false-842000/disk.qcow2
	I1007 05:09:17.899959    9508 main.go:141] libmachine: STDOUT: 
	I1007 05:09:17.899973    9508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:17.899986    9508 client.go:171] duration metric: took 406.066459ms to LocalClient.Create
	I1007 05:09:19.902194    9508 start.go:128] duration metric: took 2.464858208s to createHost
	I1007 05:09:19.902298    9508 start.go:83] releasing machines lock for "false-842000", held for 2.465361208s
	W1007 05:09:19.902734    9508 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:19.913593    9508 out.go:201] 
	W1007 05:09:19.917608    9508 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:09:19.917650    9508 out.go:270] * 
	* 
	W1007 05:09:19.920356    9508 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:09:19.930409    9508 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)
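
Note: every start failure in this group is the same host-side fault. The driver's first external call, socket_vmnet_client, exits with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so QEMU is never launched and no VM is created; the CNI under test never comes into play. A minimal triage sketch for the CI host follows (the launchd plist path and the Homebrew service name are assumptions that depend on how socket_vmnet was installed):

	ls -l /var/run/socket_vmnet                 # does the unix socket exist at all?
	sudo launchctl list | grep -i socket_vmnet  # is a daemon registered with launchd?
	# Source install with the lima-vm launchd plist (assumed path):
	sudo launchctl bootstrap system /Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist
	# Homebrew install (assumed service name):
	sudo brew services restart socket_vmnet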

TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.78099425s)

-- stdout --
	* [enable-default-cni-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-842000" primary control-plane node in "enable-default-cni-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:09:22.249461    9617 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:09:22.249600    9617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:22.249604    9617 out.go:358] Setting ErrFile to fd 2...
	I1007 05:09:22.249606    9617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:22.249735    9617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:09:22.250927    9617 out.go:352] Setting JSON to false
	I1007 05:09:22.270101    9617 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5933,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:09:22.270177    9617 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:09:22.275689    9617 out.go:177] * [enable-default-cni-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:09:22.282573    9617 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:09:22.282640    9617 notify.go:220] Checking for updates...
	I1007 05:09:22.289602    9617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:09:22.292612    9617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:09:22.295613    9617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:09:22.298620    9617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:09:22.299870    9617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:09:22.302908    9617 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:09:22.302981    9617 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:09:22.303035    9617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:09:22.307657    9617 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:09:22.312611    9617 start.go:297] selected driver: qemu2
	I1007 05:09:22.312617    9617 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:09:22.312623    9617 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:09:22.315018    9617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:09:22.317607    9617 out.go:177] * Automatically selected the socket_vmnet network
	E1007 05:09:22.320782    9617 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1007 05:09:22.320798    9617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:09:22.320821    9617 cni.go:84] Creating CNI manager for "bridge"
	I1007 05:09:22.320825    9617 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:09:22.320849    9617 start.go:340] cluster config:
	{Name:enable-default-cni-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:09:22.324945    9617 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:09:22.332622    9617 out.go:177] * Starting "enable-default-cni-842000" primary control-plane node in "enable-default-cni-842000" cluster
	I1007 05:09:22.336622    9617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:09:22.336633    9617 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:09:22.336641    9617 cache.go:56] Caching tarball of preloaded images
	I1007 05:09:22.336705    9617 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:09:22.336710    9617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:09:22.336769    9617 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/enable-default-cni-842000/config.json ...
	I1007 05:09:22.336779    9617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/enable-default-cni-842000/config.json: {Name:mk7ae1c73a17a7a84faea3e624223491505e47c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:09:22.337086    9617 start.go:360] acquireMachinesLock for enable-default-cni-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:22.337129    9617 start.go:364] duration metric: took 36.041µs to acquireMachinesLock for "enable-default-cni-842000"
	I1007 05:09:22.337141    9617 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:22.337167    9617 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:22.340736    9617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:22.355266    9617 start.go:159] libmachine.API.Create for "enable-default-cni-842000" (driver="qemu2")
	I1007 05:09:22.355287    9617 client.go:168] LocalClient.Create starting
	I1007 05:09:22.355358    9617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:22.355398    9617 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:22.355410    9617 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:22.355457    9617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:22.355484    9617 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:22.355491    9617 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:22.355926    9617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:22.499025    9617 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:22.590242    9617 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:22.590251    9617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:22.590456    9617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2
	I1007 05:09:22.600495    9617 main.go:141] libmachine: STDOUT: 
	I1007 05:09:22.600514    9617 main.go:141] libmachine: STDERR: 
	I1007 05:09:22.600576    9617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2 +20000M
	I1007 05:09:22.608978    9617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:22.608998    9617 main.go:141] libmachine: STDERR: 
	I1007 05:09:22.609019    9617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2
	I1007 05:09:22.609024    9617 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:22.609034    9617 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:22.609066    9617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:74:12:72:f2:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2
	I1007 05:09:22.610883    9617 main.go:141] libmachine: STDOUT: 
	I1007 05:09:22.610896    9617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:22.610915    9617 client.go:171] duration metric: took 255.622375ms to LocalClient.Create
	I1007 05:09:24.613191    9617 start.go:128] duration metric: took 2.275979958s to createHost
	I1007 05:09:24.613288    9617 start.go:83] releasing machines lock for "enable-default-cni-842000", held for 2.276156125s
	W1007 05:09:24.613345    9617 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:24.623708    9617 out.go:177] * Deleting "enable-default-cni-842000" in qemu2 ...
	W1007 05:09:24.649430    9617 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:24.649463    9617 start.go:729] Will try again in 5 seconds ...
	I1007 05:09:29.651708    9617 start.go:360] acquireMachinesLock for enable-default-cni-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:29.652383    9617 start.go:364] duration metric: took 568.167µs to acquireMachinesLock for "enable-default-cni-842000"
	I1007 05:09:29.652474    9617 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:29.652776    9617 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:29.662384    9617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:29.709685    9617 start.go:159] libmachine.API.Create for "enable-default-cni-842000" (driver="qemu2")
	I1007 05:09:29.709751    9617 client.go:168] LocalClient.Create starting
	I1007 05:09:29.709972    9617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:29.710060    9617 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:29.710080    9617 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:29.710159    9617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:29.710218    9617 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:29.710234    9617 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:29.710815    9617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:29.863601    9617 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:29.935366    9617 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:29.935376    9617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:29.935576    9617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2
	I1007 05:09:29.945869    9617 main.go:141] libmachine: STDOUT: 
	I1007 05:09:29.945886    9617 main.go:141] libmachine: STDERR: 
	I1007 05:09:29.945951    9617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2 +20000M
	I1007 05:09:29.954604    9617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:29.954619    9617 main.go:141] libmachine: STDERR: 
	I1007 05:09:29.954634    9617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2
	I1007 05:09:29.954644    9617 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:29.954653    9617 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:29.954689    9617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:77:f1:85:bc:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/enable-default-cni-842000/disk.qcow2
	I1007 05:09:29.956586    9617 main.go:141] libmachine: STDOUT: 
	I1007 05:09:29.956600    9617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:29.956611    9617 client.go:171] duration metric: took 246.84325ms to LocalClient.Create
	I1007 05:09:31.958821    9617 start.go:128] duration metric: took 2.306002584s to createHost
	I1007 05:09:31.958917    9617 start.go:83] releasing machines lock for "enable-default-cni-842000", held for 2.306514375s
	W1007 05:09:31.959391    9617 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:31.967823    9617 out.go:201] 
	W1007 05:09:31.973062    9617 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:09:31.973086    9617 out.go:270] * 
	* 
	W1007 05:09:31.975706    9617 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:09:31.984030    9617 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.78s)
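
Note: independent of the socket failure, the stderr above records the flag rewrite at start_flags.go:464 (`Found deprecated --enable-default-cni flag, setting --cni=bridge`), so this subtest ends up exercising the same bridge CNI path as TestNetworkPlugins/group/bridge below. The equivalent modern invocation, derived from the logged cluster config (CNI:bridge, NetworkPlugin:cni) and shown only for illustration, would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-842000 --memory=3072 --cni=bridge --driver=qemu2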

TestNetworkPlugins/group/flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.755987084s)

-- stdout --
	* [flannel-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-842000" primary control-plane node in "flannel-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:09:34.320082    9726 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:09:34.320266    9726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:34.320269    9726 out.go:358] Setting ErrFile to fd 2...
	I1007 05:09:34.320271    9726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:34.320425    9726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:09:34.321574    9726 out.go:352] Setting JSON to false
	I1007 05:09:34.339902    9726 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5945,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:09:34.339983    9726 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:09:34.345224    9726 out.go:177] * [flannel-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:09:34.353158    9726 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:09:34.353239    9726 notify.go:220] Checking for updates...
	I1007 05:09:34.360275    9726 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:09:34.361653    9726 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:09:34.364279    9726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:09:34.367277    9726 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:09:34.370270    9726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:09:34.373520    9726 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:09:34.373595    9726 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:09:34.373652    9726 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:09:34.378282    9726 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:09:34.385262    9726 start.go:297] selected driver: qemu2
	I1007 05:09:34.385267    9726 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:09:34.385274    9726 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:09:34.387647    9726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:09:34.390290    9726 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:09:34.393375    9726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:09:34.393396    9726 cni.go:84] Creating CNI manager for "flannel"
	I1007 05:09:34.393412    9726 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1007 05:09:34.393447    9726 start.go:340] cluster config:
	{Name:flannel-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:09:34.397851    9726 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:09:34.406297    9726 out.go:177] * Starting "flannel-842000" primary control-plane node in "flannel-842000" cluster
	I1007 05:09:34.410268    9726 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:09:34.410282    9726 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:09:34.410291    9726 cache.go:56] Caching tarball of preloaded images
	I1007 05:09:34.410381    9726 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:09:34.410386    9726 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:09:34.410452    9726 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/flannel-842000/config.json ...
	I1007 05:09:34.410462    9726 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/flannel-842000/config.json: {Name:mkeb627b1f4167462ab3e3f96ef8781aea7876a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:09:34.410697    9726 start.go:360] acquireMachinesLock for flannel-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:34.410741    9726 start.go:364] duration metric: took 39µs to acquireMachinesLock for "flannel-842000"
	I1007 05:09:34.410753    9726 start.go:93] Provisioning new machine with config: &{Name:flannel-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:34.410793    9726 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:34.415270    9726 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:34.430690    9726 start.go:159] libmachine.API.Create for "flannel-842000" (driver="qemu2")
	I1007 05:09:34.430722    9726 client.go:168] LocalClient.Create starting
	I1007 05:09:34.430795    9726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:34.430834    9726 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:34.430847    9726 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:34.430886    9726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:34.430914    9726 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:34.430923    9726 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:34.431286    9726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:34.574264    9726 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:34.629462    9726 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:34.629469    9726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:34.629658    9726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2
	I1007 05:09:34.639685    9726 main.go:141] libmachine: STDOUT: 
	I1007 05:09:34.639708    9726 main.go:141] libmachine: STDERR: 
	I1007 05:09:34.639776    9726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2 +20000M
	I1007 05:09:34.649055    9726 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:34.649085    9726 main.go:141] libmachine: STDERR: 
	I1007 05:09:34.649104    9726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2
	I1007 05:09:34.649113    9726 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:34.649124    9726 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:34.649154    9726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:f2:59:a6:92:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2
	I1007 05:09:34.651090    9726 main.go:141] libmachine: STDOUT: 
	I1007 05:09:34.651104    9726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:34.651124    9726 client.go:171] duration metric: took 220.39725ms to LocalClient.Create
	I1007 05:09:36.653397    9726 start.go:128] duration metric: took 2.242581666s to createHost
	I1007 05:09:36.653470    9726 start.go:83] releasing machines lock for "flannel-842000", held for 2.242725459s
	W1007 05:09:36.653526    9726 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:36.664740    9726 out.go:177] * Deleting "flannel-842000" in qemu2 ...
	W1007 05:09:36.689590    9726 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:36.689638    9726 start.go:729] Will try again in 5 seconds ...
	I1007 05:09:41.691713    9726 start.go:360] acquireMachinesLock for flannel-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:41.691882    9726 start.go:364] duration metric: took 139.708µs to acquireMachinesLock for "flannel-842000"
	I1007 05:09:41.691919    9726 start.go:93] Provisioning new machine with config: &{Name:flannel-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:41.691997    9726 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:41.695962    9726 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:41.715555    9726 start.go:159] libmachine.API.Create for "flannel-842000" (driver="qemu2")
	I1007 05:09:41.715588    9726 client.go:168] LocalClient.Create starting
	I1007 05:09:41.715676    9726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:41.715721    9726 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:41.715730    9726 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:41.715770    9726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:41.715799    9726 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:41.715805    9726 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:41.716133    9726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:41.924531    9726 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:41.972376    9726 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:41.972391    9726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:41.972610    9726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2
	I1007 05:09:41.988062    9726 main.go:141] libmachine: STDOUT: 
	I1007 05:09:41.988094    9726 main.go:141] libmachine: STDERR: 
	I1007 05:09:41.988171    9726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2 +20000M
	I1007 05:09:42.000949    9726 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:42.000970    9726 main.go:141] libmachine: STDERR: 
	I1007 05:09:42.000985    9726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2
	I1007 05:09:42.000990    9726 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:42.001006    9726 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:42.001041    9726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:50:ba:ad:1d:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/flannel-842000/disk.qcow2
	I1007 05:09:42.003138    9726 main.go:141] libmachine: STDOUT: 
	I1007 05:09:42.003154    9726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:42.003167    9726 client.go:171] duration metric: took 287.576ms to LocalClient.Create
	I1007 05:09:44.005378    9726 start.go:128] duration metric: took 2.313355208s to createHost
	I1007 05:09:44.005656    9726 start.go:83] releasing machines lock for "flannel-842000", held for 2.31359925s
	W1007 05:09:44.006066    9726 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:44.015696    9726 out.go:201] 
	W1007 05:09:44.018710    9726 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:09:44.018738    9726 out.go:270] * 
	* 
	W1007 05:09:44.021388    9726 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:09:44.032695    9726 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)
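All of the failures in this group share the root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so every qemu2 VM start dies before boot. A minimal diagnostic sketch (a hypothetical helper, not part of the test suite) that dials the same socket the client does; the socket path is the SocketVMnetPath value from the cluster config dumps above:

// probesocket.go: hypothetical diagnostic, not minikube code. It dials the
// socket_vmnet unix socket the way socket_vmnet_client does, to check
// whether the daemon is actually listening on this CI host.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A missing daemon yields "connection refused", matching the STDERR
		// lines above; a permission error would instead point at socket
		// ownership rather than a dead daemon.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}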

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.878687334s)

-- stdout --
	* [bridge-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-842000" primary control-plane node in "bridge-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:09:46.550225    9848 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:09:46.550373    9848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:46.550377    9848 out.go:358] Setting ErrFile to fd 2...
	I1007 05:09:46.550379    9848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:46.550495    9848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:09:46.551639    9848 out.go:352] Setting JSON to false
	I1007 05:09:46.569353    9848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5957,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:09:46.569445    9848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:09:46.574367    9848 out.go:177] * [bridge-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:09:46.582199    9848 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:09:46.582269    9848 notify.go:220] Checking for updates...
	I1007 05:09:46.589285    9848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:09:46.592281    9848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:09:46.595309    9848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:09:46.598237    9848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:09:46.601269    9848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:09:46.604597    9848 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:09:46.604674    9848 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:09:46.604718    9848 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:09:46.609274    9848 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:09:46.616222    9848 start.go:297] selected driver: qemu2
	I1007 05:09:46.616227    9848 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:09:46.616232    9848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:09:46.618673    9848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:09:46.622270    9848 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:09:46.625273    9848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:09:46.625303    9848 cni.go:84] Creating CNI manager for "bridge"
	I1007 05:09:46.625311    9848 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:09:46.625344    9848 start.go:340] cluster config:
	{Name:bridge-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:09:46.630039    9848 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:09:46.638266    9848 out.go:177] * Starting "bridge-842000" primary control-plane node in "bridge-842000" cluster
	I1007 05:09:46.642250    9848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:09:46.642266    9848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:09:46.642276    9848 cache.go:56] Caching tarball of preloaded images
	I1007 05:09:46.642357    9848 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:09:46.642370    9848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:09:46.642442    9848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/bridge-842000/config.json ...
	I1007 05:09:46.642456    9848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/bridge-842000/config.json: {Name:mkdcb981c4bb99ec3ab7ed1eec7c13acd3f4f588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:09:46.642823    9848 start.go:360] acquireMachinesLock for bridge-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:46.642875    9848 start.go:364] duration metric: took 45.292µs to acquireMachinesLock for "bridge-842000"
	I1007 05:09:46.642887    9848 start.go:93] Provisioning new machine with config: &{Name:bridge-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:46.642923    9848 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:46.646336    9848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:46.663113    9848 start.go:159] libmachine.API.Create for "bridge-842000" (driver="qemu2")
	I1007 05:09:46.663143    9848 client.go:168] LocalClient.Create starting
	I1007 05:09:46.663209    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:46.663250    9848 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:46.663262    9848 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:46.663300    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:46.663329    9848 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:46.663337    9848 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:46.663793    9848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:46.808989    9848 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:46.928213    9848 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:46.928223    9848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:46.928418    9848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2
	I1007 05:09:46.938141    9848 main.go:141] libmachine: STDOUT: 
	I1007 05:09:46.938169    9848 main.go:141] libmachine: STDERR: 
	I1007 05:09:46.938231    9848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2 +20000M
	I1007 05:09:46.946842    9848 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:46.946856    9848 main.go:141] libmachine: STDERR: 
	I1007 05:09:46.946872    9848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2
	I1007 05:09:46.946877    9848 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:46.946889    9848 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:46.946917    9848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:62:ba:63:e7:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2
	I1007 05:09:46.948771    9848 main.go:141] libmachine: STDOUT: 
	I1007 05:09:46.948783    9848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:46.948814    9848 client.go:171] duration metric: took 285.664417ms to LocalClient.Create
	I1007 05:09:48.950966    9848 start.go:128] duration metric: took 2.308004166s to createHost
	I1007 05:09:48.951032    9848 start.go:83] releasing machines lock for "bridge-842000", held for 2.308156833s
	W1007 05:09:48.951084    9848 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:48.961135    9848 out.go:177] * Deleting "bridge-842000" in qemu2 ...
	W1007 05:09:48.981314    9848 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:48.981329    9848 start.go:729] Will try again in 5 seconds ...
	I1007 05:09:53.983489    9848 start.go:360] acquireMachinesLock for bridge-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:53.984265    9848 start.go:364] duration metric: took 604.625µs to acquireMachinesLock for "bridge-842000"
	I1007 05:09:53.984342    9848 start.go:93] Provisioning new machine with config: &{Name:bridge-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:53.984777    9848 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:53.995399    9848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:54.045671    9848 start.go:159] libmachine.API.Create for "bridge-842000" (driver="qemu2")
	I1007 05:09:54.045726    9848 client.go:168] LocalClient.Create starting
	I1007 05:09:54.045912    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:54.046004    9848 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:54.046025    9848 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:54.046103    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:54.046168    9848 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:54.046183    9848 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:54.046851    9848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:54.201090    9848 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:54.339908    9848 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:54.339917    9848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:54.340115    9848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2
	I1007 05:09:54.350741    9848 main.go:141] libmachine: STDOUT: 
	I1007 05:09:54.350771    9848 main.go:141] libmachine: STDERR: 
	I1007 05:09:54.350829    9848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2 +20000M
	I1007 05:09:54.359297    9848 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:54.359317    9848 main.go:141] libmachine: STDERR: 
	I1007 05:09:54.359334    9848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2
	I1007 05:09:54.359340    9848 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:54.359347    9848 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:54.359374    9848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:27:0a:16:06:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/bridge-842000/disk.qcow2
	I1007 05:09:54.361287    9848 main.go:141] libmachine: STDOUT: 
	I1007 05:09:54.361300    9848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:54.361316    9848 client.go:171] duration metric: took 315.584166ms to LocalClient.Create
	I1007 05:09:56.363375    9848 start.go:128] duration metric: took 2.378589542s to createHost
	I1007 05:09:56.363404    9848 start.go:83] releasing machines lock for "bridge-842000", held for 2.3791255s
	W1007 05:09:56.363503    9848 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:09:56.374684    9848 out.go:201] 
	W1007 05:09:56.378694    9848 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:09:56.378705    9848 out.go:270] * 
	* 
	W1007 05:09:56.379198    9848 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:09:56.386737    9848 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
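The bridge log also shows the create-retry shape minikube applies before giving up: the first StartHost attempt fails, the half-created profile is deleted, and after a fixed 5-second pause one more attempt is made before the run exits with GUEST_PROVISION. A schematic sketch of that control flow, with hypothetical names standing in for minikube's real start logic:

// retrysketch.go: schematic sketch of the retry shape in the log above
// ("! StartHost failed, but will try again" / "Will try again in 5 seconds").
// All names here are hypothetical stand-ins, not minikube's actual API.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// startHost stands in for the libmachine create path that fails above.
func startHost() error { return errRefused }

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		fmt.Println(`* Deleting "bridge-842000" in qemu2 ...`) // clean up the half-created VM
		time.Sleep(5 * time.Second)                            // fixed backoff seen in the log
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}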

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-842000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.910382375s)

-- stdout --
	* [kubenet-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-842000" primary control-plane node in "kubenet-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:09:58.860823    9960 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:09:58.860982    9960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:58.860987    9960 out.go:358] Setting ErrFile to fd 2...
	I1007 05:09:58.860990    9960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:09:58.861131    9960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:09:58.862420    9960 out.go:352] Setting JSON to false
	I1007 05:09:58.881937    9960 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5969,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:09:58.882014    9960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:09:58.886362    9960 out.go:177] * [kubenet-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:09:58.893394    9960 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:09:58.893451    9960 notify.go:220] Checking for updates...
	I1007 05:09:58.901383    9960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:09:58.902715    9960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:09:58.905398    9960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:09:58.908386    9960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:09:58.911411    9960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:09:58.914741    9960 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:09:58.914795    9960 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:09:58.914846    9960 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:09:58.919362    9960 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:09:58.928426    9960 start.go:297] selected driver: qemu2
	I1007 05:09:58.928436    9960 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:09:58.928445    9960 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:09:58.931151    9960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:09:58.935390    9960 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:09:58.936868    9960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:09:58.936886    9960 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1007 05:09:58.936913    9960 start.go:340] cluster config:
	{Name:kubenet-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:09:58.941238    9960 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:09:58.949420    9960 out.go:177] * Starting "kubenet-842000" primary control-plane node in "kubenet-842000" cluster
	I1007 05:09:58.953361    9960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:09:58.953375    9960 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:09:58.953380    9960 cache.go:56] Caching tarball of preloaded images
	I1007 05:09:58.953448    9960 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:09:58.953453    9960 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:09:58.953519    9960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/kubenet-842000/config.json ...
	I1007 05:09:58.953529    9960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/kubenet-842000/config.json: {Name:mkcc2f6f4ea31538d370b46dbd68b9136ab4a1a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:09:58.953803    9960 start.go:360] acquireMachinesLock for kubenet-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:09:58.953855    9960 start.go:364] duration metric: took 45.542µs to acquireMachinesLock for "kubenet-842000"
	I1007 05:09:58.953868    9960 start.go:93] Provisioning new machine with config: &{Name:kubenet-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:09:58.953896    9960 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:09:58.957501    9960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:09:58.972033    9960 start.go:159] libmachine.API.Create for "kubenet-842000" (driver="qemu2")
	I1007 05:09:58.972072    9960 client.go:168] LocalClient.Create starting
	I1007 05:09:58.972145    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:09:58.972187    9960 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:58.972199    9960 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:58.972244    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:09:58.972273    9960 main.go:141] libmachine: Decoding PEM data...
	I1007 05:09:58.972279    9960 main.go:141] libmachine: Parsing certificate...
	I1007 05:09:58.972660    9960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:09:59.114012    9960 main.go:141] libmachine: Creating SSH key...
	I1007 05:09:59.142195    9960 main.go:141] libmachine: Creating Disk image...
	I1007 05:09:59.142200    9960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:09:59.142382    9960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2
	I1007 05:09:59.152564    9960 main.go:141] libmachine: STDOUT: 
	I1007 05:09:59.152589    9960 main.go:141] libmachine: STDERR: 
	I1007 05:09:59.152675    9960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2 +20000M
	I1007 05:09:59.161472    9960 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:09:59.161489    9960 main.go:141] libmachine: STDERR: 
	I1007 05:09:59.161510    9960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2
	I1007 05:09:59.161517    9960 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:09:59.161528    9960 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:09:59.161567    9960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:2b:b2:e1:ad:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2
	I1007 05:09:59.163498    9960 main.go:141] libmachine: STDOUT: 
	I1007 05:09:59.163513    9960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:09:59.163535    9960 client.go:171] duration metric: took 191.457041ms to LocalClient.Create
	I1007 05:10:01.165814    9960 start.go:128] duration metric: took 2.211863041s to createHost
	I1007 05:10:01.165907    9960 start.go:83] releasing machines lock for "kubenet-842000", held for 2.212049125s
	W1007 05:10:01.165954    9960 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:01.181293    9960 out.go:177] * Deleting "kubenet-842000" in qemu2 ...
	W1007 05:10:01.206353    9960 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:01.206386    9960 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:06.207664    9960 start.go:360] acquireMachinesLock for kubenet-842000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:06.208118    9960 start.go:364] duration metric: took 372.709µs to acquireMachinesLock for "kubenet-842000"
	I1007 05:10:06.208259    9960 start.go:93] Provisioning new machine with config: &{Name:kubenet-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:06.208442    9960 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:06.216762    9960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:10:06.253857    9960 start.go:159] libmachine.API.Create for "kubenet-842000" (driver="qemu2")
	I1007 05:10:06.253913    9960 client.go:168] LocalClient.Create starting
	I1007 05:10:06.254059    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:06.254142    9960 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:06.254159    9960 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:06.254236    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:06.254286    9960 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:06.254303    9960 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:06.254892    9960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:06.404957    9960 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:06.665979    9960 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:06.665994    9960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:06.666240    9960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2
	I1007 05:10:06.676929    9960 main.go:141] libmachine: STDOUT: 
	I1007 05:10:06.676949    9960 main.go:141] libmachine: STDERR: 
	I1007 05:10:06.677012    9960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2 +20000M
	I1007 05:10:06.685730    9960 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:06.685746    9960 main.go:141] libmachine: STDERR: 
	I1007 05:10:06.685761    9960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2
	I1007 05:10:06.685766    9960 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:06.685776    9960 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:06.685815    9960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:89:89:8f:43:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/kubenet-842000/disk.qcow2
	I1007 05:10:06.687740    9960 main.go:141] libmachine: STDOUT: 
	I1007 05:10:06.687757    9960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:06.687770    9960 client.go:171] duration metric: took 433.853459ms to LocalClient.Create
	I1007 05:10:08.689984    9960 start.go:128] duration metric: took 2.481509417s to createHost
	I1007 05:10:08.690049    9960 start.go:83] releasing machines lock for "kubenet-842000", held for 2.481918875s
	W1007 05:10:08.690384    9960 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:08.702224    9960 out.go:201] 
	W1007 05:10:08.706243    9960 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:08.706291    9960 out.go:270] * 
	* 
	W1007 05:10:08.709164    9960 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:08.717102    9960 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
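Note that in every run the disk image itself is prepared successfully; only the network hookup fails. The preparation is the two qemu-img steps logged as "executing: qemu-img convert ..." and "executing: qemu-img resize ... +20000M". A self-contained sketch of those two calls (paths are shortened for illustration; the real ones live under MINIKUBE_HOME/machines, and qemu-img is assumed to be on PATH):

// disksketch.go: sketch of the two qemu-img steps from the logs above.
// The file names here are illustrative placeholders, not the real paths.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure,
// mirroring the STDOUT/STDERR capture in the libmachine log lines above.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s failed: %v\n%s", name, err, out)
	}
	fmt.Print(string(out))
}

func main() {
	raw, qcow := "disk.qcow2.raw", "disk.qcow2"
	// Convert the raw seed image to qcow2, as in "qemu-img convert -f raw -O qcow2 ...".
	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow)
	// Grow the image by 20000 MB; on success qemu-img prints "Image resized.".
	run("qemu-img", "resize", qcow, "+20000M")
}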

TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.8061895s)

-- stdout --
	* [old-k8s-version-537000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:10:11.095234   10073 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:11.095388   10073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:11.095391   10073 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:11.095394   10073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:11.095525   10073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:11.096683   10073 out.go:352] Setting JSON to false
	I1007 05:10:11.114916   10073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5982,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:11.114984   10073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:11.119609   10073 out.go:177] * [old-k8s-version-537000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:11.126540   10073 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:11.126601   10073 notify.go:220] Checking for updates...
	I1007 05:10:11.133398   10073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:11.136501   10073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:11.139544   10073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:11.142440   10073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:11.145442   10073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:11.148884   10073 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:11.148961   10073 config.go:182] Loaded profile config "stopped-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:10:11.149000   10073 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:11.153498   10073 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:10:11.160508   10073 start.go:297] selected driver: qemu2
	I1007 05:10:11.160514   10073 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:10:11.160520   10073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:11.162964   10073 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:10:11.166450   10073 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:10:11.169631   10073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:11.169646   10073 cni.go:84] Creating CNI manager for ""
	I1007 05:10:11.169667   10073 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 05:10:11.169697   10073 start.go:340] cluster config:
	{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:11.174323   10073 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:11.182475   10073 out.go:177] * Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	I1007 05:10:11.186487   10073 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:10:11.186503   10073 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:10:11.186512   10073 cache.go:56] Caching tarball of preloaded images
	I1007 05:10:11.186587   10073 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:10:11.186593   10073 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 05:10:11.186655   10073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/old-k8s-version-537000/config.json ...
	I1007 05:10:11.186671   10073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/old-k8s-version-537000/config.json: {Name:mk7a872ae53cf8494ea76ddc8606672a2dff66b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:10:11.186910   10073 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:11.186955   10073 start.go:364] duration metric: took 39.25µs to acquireMachinesLock for "old-k8s-version-537000"
	I1007 05:10:11.186967   10073 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:11.186993   10073 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:11.195481   10073 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:11.211634   10073 start.go:159] libmachine.API.Create for "old-k8s-version-537000" (driver="qemu2")
	I1007 05:10:11.211662   10073 client.go:168] LocalClient.Create starting
	I1007 05:10:11.211734   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:11.211774   10073 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:11.211785   10073 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:11.211824   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:11.211860   10073 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:11.211873   10073 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:11.212247   10073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:11.355664   10073 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:11.441002   10073 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:11.441013   10073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:11.441188   10073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:11.451173   10073 main.go:141] libmachine: STDOUT: 
	I1007 05:10:11.451196   10073 main.go:141] libmachine: STDERR: 
	I1007 05:10:11.451259   10073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2 +20000M
	I1007 05:10:11.459849   10073 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:11.459864   10073 main.go:141] libmachine: STDERR: 
	I1007 05:10:11.459889   10073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:11.459895   10073 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:11.459907   10073 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:11.459936   10073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0c:bd:5a:04:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:11.461833   10073 main.go:141] libmachine: STDOUT: 
	I1007 05:10:11.461849   10073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:11.461871   10073 client.go:171] duration metric: took 250.205417ms to LocalClient.Create
	I1007 05:10:13.464128   10073 start.go:128] duration metric: took 2.277106917s to createHost
	I1007 05:10:13.464205   10073 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 2.277246458s
	W1007 05:10:13.464281   10073 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:13.477419   10073 out.go:177] * Deleting "old-k8s-version-537000" in qemu2 ...
	W1007 05:10:13.498793   10073 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:13.498828   10073 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:18.499191   10073 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:18.499743   10073 start.go:364] duration metric: took 416.834µs to acquireMachinesLock for "old-k8s-version-537000"
	I1007 05:10:18.499867   10073 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:18.500090   10073 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:18.509715   10073 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:18.559887   10073 start.go:159] libmachine.API.Create for "old-k8s-version-537000" (driver="qemu2")
	I1007 05:10:18.559936   10073 client.go:168] LocalClient.Create starting
	I1007 05:10:18.560069   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:18.560141   10073 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:18.560159   10073 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:18.560216   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:18.560275   10073 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:18.560289   10073 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:18.560858   10073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:18.716331   10073 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:18.801488   10073 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:18.801496   10073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:18.801677   10073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:18.812049   10073 main.go:141] libmachine: STDOUT: 
	I1007 05:10:18.812066   10073 main.go:141] libmachine: STDERR: 
	I1007 05:10:18.812140   10073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2 +20000M
	I1007 05:10:18.821046   10073 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:18.821066   10073 main.go:141] libmachine: STDERR: 
	I1007 05:10:18.821080   10073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:18.821089   10073 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:18.821099   10073 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:18.821127   10073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:cb:b1:0a:6d:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:18.823013   10073 main.go:141] libmachine: STDOUT: 
	I1007 05:10:18.823027   10073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:18.823042   10073 client.go:171] duration metric: took 263.100291ms to LocalClient.Create
	I1007 05:10:20.825240   10073 start.go:128] duration metric: took 2.325123959s to createHost
	I1007 05:10:20.825299   10073 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 2.325539333s
	W1007 05:10:20.825817   10073 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:20.836482   10073 out.go:201] 
	W1007 05:10:20.842606   10073 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:20.842696   10073 out.go:270] * 
	* 
	W1007 05:10:20.845447   10073 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:20.854457   10073 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (72.049958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)
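
The stderr above also shows the driver's single-retry provisioning flow: the first createHost fails, the partial profile is deleted, the start path waits five seconds ("Will try again in 5 seconds ..."), and the second failure is surfaced as GUEST_PROVISION with exit status 80. A hedged sketch of that control flow (createHost and deleteHost are hypothetical stand-ins for the libmachine calls, not minikube's actual API):

// retryflow.go - the start/delete/retry shape visible in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// errConnRefused mirrors the error string seen throughout this report.
var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// createHost is a hypothetical stand-in; on this host it always fails.
func createHost(name string) error { return errConnRefused }

// deleteHost is a hypothetical stand-in for cleaning up the partial profile.
func deleteHost(name string) { fmt.Printf("* Deleting %q in qemu2 ...\n", name) }

func startWithRetry(name string) error {
	if err := createHost(name); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost(name)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(name); err != nil {
			return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
		}
	}
	return nil
}

func main() {
	if err := startWithRetry("old-k8s-version-537000"); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

Because both attempts dial the same dead socket, the retry only adds roughly five seconds to each failing start, which is why so many failures in this report cluster around the 9-10 second mark.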

TestStartStop/group/no-preload/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-599000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-599000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.807317583s)

-- stdout --
	* [no-preload-599000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-599000" primary control-plane node in "no-preload-599000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-599000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:10:14.733129   10086 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:14.733295   10086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:14.733298   10086 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:14.733300   10086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:14.733411   10086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:14.734584   10086 out.go:352] Setting JSON to false
	I1007 05:10:14.752552   10086 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5985,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:14.752619   10086 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:14.756364   10086 out.go:177] * [no-preload-599000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:14.763302   10086 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:14.763340   10086 notify.go:220] Checking for updates...
	I1007 05:10:14.770257   10086 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:14.773268   10086 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:14.776269   10086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:14.779225   10086 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:14.782249   10086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:14.785661   10086 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:14.785747   10086 config.go:182] Loaded profile config "old-k8s-version-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:10:14.785807   10086 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:14.789207   10086 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:10:14.796232   10086 start.go:297] selected driver: qemu2
	I1007 05:10:14.796239   10086 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:10:14.796245   10086 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:14.798697   10086 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:10:14.800235   10086 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:10:14.803308   10086 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:14.803328   10086 cni.go:84] Creating CNI manager for ""
	I1007 05:10:14.803362   10086 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:10:14.803374   10086 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:10:14.803400   10086 start.go:340] cluster config:
	{Name:no-preload-599000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:14.808096   10086 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.816225   10086 out.go:177] * Starting "no-preload-599000" primary control-plane node in "no-preload-599000" cluster
	I1007 05:10:14.820239   10086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:10:14.820355   10086 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/no-preload-599000/config.json ...
	I1007 05:10:14.820374   10086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/no-preload-599000/config.json: {Name:mk839bbd7242f88e46df7feeef2d07fe3516c266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:10:14.820377   10086 cache.go:107] acquiring lock: {Name:mkf4d7d0e210cfec46646868b33d8ac3b8550a66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820382   10086 cache.go:107] acquiring lock: {Name:mk2faa9ab705ed967ffa99f82a514544fd6d4ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820399   10086 cache.go:107] acquiring lock: {Name:mkc4d442531114768fb87ff22d853aef62e511a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820544   10086 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 05:10:14.820535   10086 cache.go:107] acquiring lock: {Name:mk12901308a81ec877d86ce3b8cede03e4b4516a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820566   10086 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 190.791µs
	I1007 05:10:14.820582   10086 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 05:10:14.820567   10086 cache.go:107] acquiring lock: {Name:mk237255dfd48a965795e13c2073abc31711ca68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820597   10086 cache.go:107] acquiring lock: {Name:mk9fd61c0a8f665c5ee740d7b8d6f55336ed1803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820599   10086 cache.go:107] acquiring lock: {Name:mka158b6580ee3652b55864f46fd753d7a5d46ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820618   10086 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 05:10:14.820663   10086 cache.go:107] acquiring lock: {Name:mkca30595545392e3814d82d78c6094202928208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:14.820689   10086 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 05:10:14.820782   10086 start.go:360] acquireMachinesLock for no-preload-599000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:14.820790   10086 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1007 05:10:14.820820   10086 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1007 05:10:14.820965   10086 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 05:10:14.820969   10086 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 05:10:14.820994   10086 start.go:364] duration metric: took 201.333µs to acquireMachinesLock for "no-preload-599000"
	I1007 05:10:14.821013   10086 start.go:93] Provisioning new machine with config: &{Name:no-preload-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:14.821044   10086 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:14.821121   10086 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 05:10:14.829228   10086 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:14.833541   10086 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1007 05:10:14.833596   10086 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 05:10:14.834579   10086 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 05:10:14.834891   10086 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1007 05:10:14.835195   10086 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 05:10:14.835278   10086 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 05:10:14.835299   10086 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 05:10:14.846925   10086 start.go:159] libmachine.API.Create for "no-preload-599000" (driver="qemu2")
	I1007 05:10:14.846943   10086 client.go:168] LocalClient.Create starting
	I1007 05:10:14.847021   10086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:14.847057   10086 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:14.847067   10086 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:14.847107   10086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:14.847136   10086 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:14.847144   10086 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:14.847495   10086 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:15.043231   10086 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:15.149914   10086 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:15.149931   10086 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:15.150120   10086 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:15.161139   10086 main.go:141] libmachine: STDOUT: 
	I1007 05:10:15.161168   10086 main.go:141] libmachine: STDERR: 
	I1007 05:10:15.161224   10086 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2 +20000M
	I1007 05:10:15.170107   10086 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:15.170126   10086 main.go:141] libmachine: STDERR: 
	I1007 05:10:15.170145   10086 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:15.170149   10086 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:15.170162   10086 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:15.170193   10086 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:3a:e7:2f:ca:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:15.172161   10086 main.go:141] libmachine: STDOUT: 
	I1007 05:10:15.172174   10086 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:15.172194   10086 client.go:171] duration metric: took 325.246583ms to LocalClient.Create
	I1007 05:10:15.287801   10086 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1007 05:10:15.296783   10086 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1007 05:10:15.339243   10086 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1007 05:10:15.399660   10086 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1007 05:10:15.481980   10086 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1007 05:10:15.502313   10086 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1007 05:10:15.502330   10086 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 681.818667ms
	I1007 05:10:15.502344   10086 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1007 05:10:15.502839   10086 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1007 05:10:15.569145   10086 cache.go:162] opening:  /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1007 05:10:17.172685   10086 start.go:128] duration metric: took 2.351603625s to createHost
	I1007 05:10:17.172751   10086 start.go:83] releasing machines lock for "no-preload-599000", held for 2.351749542s
	W1007 05:10:17.172796   10086 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:17.179014   10086 out.go:177] * Deleting "no-preload-599000" in qemu2 ...
	W1007 05:10:17.204818   10086 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:17.204850   10086 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:18.162272   10086 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1007 05:10:18.162351   10086 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.341834125s
	I1007 05:10:18.162398   10086 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1007 05:10:19.052967   10086 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1007 05:10:19.053002   10086 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.232646167s
	I1007 05:10:19.053021   10086 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1007 05:10:19.482647   10086 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1007 05:10:19.482699   10086 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.662111333s
	I1007 05:10:19.482726   10086 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1007 05:10:19.532681   10086 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1007 05:10:19.532755   10086 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.712223042s
	I1007 05:10:19.532787   10086 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1007 05:10:19.970537   10086 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1007 05:10:19.970586   10086 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 5.150210958s
	I1007 05:10:19.970634   10086 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1007 05:10:22.204996   10086 start.go:360] acquireMachinesLock for no-preload-599000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:22.205394   10086 start.go:364] duration metric: took 327.833µs to acquireMachinesLock for "no-preload-599000"
	I1007 05:10:22.205561   10086 start.go:93] Provisioning new machine with config: &{Name:no-preload-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:22.205810   10086 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:22.215471   10086 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:22.264584   10086 start.go:159] libmachine.API.Create for "no-preload-599000" (driver="qemu2")
	I1007 05:10:22.264632   10086 client.go:168] LocalClient.Create starting
	I1007 05:10:22.264751   10086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:22.264804   10086 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:22.264825   10086 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:22.264912   10086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:22.264943   10086 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:22.264962   10086 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:22.265547   10086 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:22.419602   10086 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:22.449618   10086 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:22.449624   10086 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:22.449813   10086 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:22.459996   10086 main.go:141] libmachine: STDOUT: 
	I1007 05:10:22.460011   10086 main.go:141] libmachine: STDERR: 
	I1007 05:10:22.460059   10086 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2 +20000M
	I1007 05:10:22.468692   10086 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:22.468715   10086 main.go:141] libmachine: STDERR: 
	I1007 05:10:22.468737   10086 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:22.468746   10086 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:22.468756   10086 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:22.468798   10086 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:18:46:0b:a1:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:22.470740   10086 main.go:141] libmachine: STDOUT: 
	I1007 05:10:22.470761   10086 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:22.470775   10086 client.go:171] duration metric: took 206.139167ms to LocalClient.Create
	I1007 05:10:23.400120   10086 cache.go:157] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1007 05:10:23.400210   10086 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.579719s
	I1007 05:10:23.400244   10086 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1007 05:10:23.400317   10086 cache.go:87] Successfully saved all images to host disk.
	I1007 05:10:24.472895   10086 start.go:128] duration metric: took 2.267054666s to createHost
	I1007 05:10:24.472965   10086 start.go:83] releasing machines lock for "no-preload-599000", held for 2.267554834s
	W1007 05:10:24.473167   10086 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-599000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-599000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:24.481974   10086 out.go:201] 
	W1007 05:10:24.486080   10086 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:24.486097   10086 out.go:270] * 
	* 
	W1007 05:10:24.487111   10086 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:24.497826   10086 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-599000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (41.067459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.85s)
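Every failure in this group traces back to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM never boots and everything downstream cascades. The following is a minimal Go sketch of the same reachability probe the driver effectively performs; the socket path is the SocketVMnetPath value from the profile config in the logs, and the program is a diagnostic aid, not part of the suite.

// socketcheck.go: probe the socket_vmnet UNIX socket used by the qemu2 driver.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath as recorded in the cluster config above.
	const path = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// "connection refused" means nothing is listening on the socket,
		// the same condition the libmachine STDERR lines report.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", path, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", path)
}

If the probe fails, restarting the daemon on the build agent should clear this whole class of failures; for a Homebrew-based install that is typically `sudo brew services start socket_vmnet`, though that is an assumption about this agent's setup rather than something the log confirms.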

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-537000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-537000 create -f testdata/busybox.yaml: exit status 1 (28.959542ms)

** stderr ** 
	error: context "old-k8s-version-537000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-537000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (33.439417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (33.332333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
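The `context "old-k8s-version-537000" does not exist` errors here are a cascade, not an independent bug: because the VM never started, minikube never wrote a context entry for the profile into the kubeconfig, so every `kubectl --context` invocation fails immediately. A short sketch, assuming the KUBECONFIG path printed in the start output and the standard client-go loader, that lists which contexts actually exist in that file:

// contexts.go: list the contexts kubectl would resolve --context against.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path as printed in the start output above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19763-6232/kubeconfig")
	if err != nil {
		panic(err)
	}
	// A profile whose first start failed has no entry in this map,
	// which is exactly what produces `context "..." does not exist`.
	for name := range cfg.Contexts {
		fmt.Println(name)
	}
}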

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-537000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-537000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-537000 describe deploy/metrics-server -n kube-system: exit status 1 (27.167542ms)

** stderr ** 
	error: context "old-k8s-version-537000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-537000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (33.503375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-599000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-599000 create -f testdata/busybox.yaml: exit status 1 (28.911833ms)

** stderr ** 
	error: context "no-preload-599000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-599000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (34.532ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (34.669667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.221869166s)

-- stdout --
	* [old-k8s-version-537000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:10:24.629093   10171 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:24.629277   10171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:24.629280   10171 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:24.629282   10171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:24.629428   10171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:24.630929   10171 out.go:352] Setting JSON to false
	I1007 05:10:24.652386   10171 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5995,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:24.652470   10171 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:24.657035   10171 out.go:177] * [old-k8s-version-537000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:24.664006   10171 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:24.664029   10171 notify.go:220] Checking for updates...
	I1007 05:10:24.667976   10171 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:24.671033   10171 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:24.674041   10171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:24.676985   10171 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:24.683995   10171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:24.688199   10171 config.go:182] Loaded profile config "old-k8s-version-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:10:24.695963   10171 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 05:10:24.703046   10171 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:24.711012   10171 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:10:24.722997   10171 start.go:297] selected driver: qemu2
	I1007 05:10:24.723004   10171 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:24.723068   10171 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:24.726063   10171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:24.726091   10171 cni.go:84] Creating CNI manager for ""
	I1007 05:10:24.726113   10171 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 05:10:24.726136   10171 start.go:340] cluster config:
	{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:24.730986   10171 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:24.732654   10171 out.go:177] * Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	I1007 05:10:24.741035   10171 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:10:24.741071   10171 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:10:24.741081   10171 cache.go:56] Caching tarball of preloaded images
	I1007 05:10:24.741192   10171 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:10:24.741198   10171 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 05:10:24.741259   10171 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/old-k8s-version-537000/config.json ...
	I1007 05:10:24.741605   10171 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:24.741640   10171 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "old-k8s-version-537000"
	I1007 05:10:24.741649   10171 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:24.741655   10171 fix.go:54] fixHost starting: 
	I1007 05:10:24.741777   10171 fix.go:112] recreateIfNeeded on old-k8s-version-537000: state=Stopped err=<nil>
	W1007 05:10:24.741788   10171 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:24.745970   10171 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	I1007 05:10:24.753952   10171 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:24.753988   10171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:cb:b1:0a:6d:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:24.756218   10171 main.go:141] libmachine: STDOUT: 
	I1007 05:10:24.756234   10171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:24.756266   10171 fix.go:56] duration metric: took 14.60875ms for fixHost
	I1007 05:10:24.756272   10171 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 14.627792ms
	W1007 05:10:24.756278   10171 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:24.756339   10171 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:24.756344   10171 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:29.758536   10171 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:29.758954   10171 start.go:364] duration metric: took 313.083µs to acquireMachinesLock for "old-k8s-version-537000"
	I1007 05:10:29.759078   10171 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:29.759098   10171 fix.go:54] fixHost starting: 
	I1007 05:10:29.759849   10171 fix.go:112] recreateIfNeeded on old-k8s-version-537000: state=Stopped err=<nil>
	W1007 05:10:29.759878   10171 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:29.768238   10171 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	I1007 05:10:29.771258   10171 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:29.771506   10171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:cb:b1:0a:6d:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I1007 05:10:29.781702   10171 main.go:141] libmachine: STDOUT: 
	I1007 05:10:29.781785   10171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:29.781878   10171 fix.go:56] duration metric: took 22.781084ms for fixHost
	I1007 05:10:29.781907   10171 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 22.930334ms
	W1007 05:10:29.782147   10171 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:29.790222   10171 out.go:201] 
	W1007 05:10:29.793358   10171 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:29.793414   10171 out.go:270] * 
	* 
	W1007 05:10:29.796275   10171 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:29.807251   10171 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (70.175208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)
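The ~5.2s duration of this test is almost entirely minikube's internal retry: fixHost fails in about 15ms, start.go waits a fixed 5 seconds ("Will try again in 5 seconds ..."), fails identically, and exits with code 80 (GUEST_PROVISION), which the test then asserts against. A compressed sketch of that control flow as it appears in the stderr log, with startHost as a hypothetical stand-in for the real driver start call:

// retryflow.go: the start/retry shape visible in the stderr log above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the driver start; in this run it always fails
// the same way because nothing is listening on /var/run/socket_vmnet.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // exit status 80, as reported by the test harness
	}
}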

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-599000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-599000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-599000 describe deploy/metrics-server -n kube-system: exit status 1 (27.952458ms)

** stderr ** 
	error: context "no-preload-599000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-599000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (32.78425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)
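Each post-mortem in this report runs `minikube status` and notes "status error: exit status 7 (may be ok)": the helper deliberately tolerates a non-zero status code, since a stopped host is an expected state after a failed start. A sketch of how such a helper can capture both the output and the exit code without treating it as fatal, reusing the binary path and profile name from this report:

// statuscode.go: capture minikube status output plus its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-599000")
	out, err := cmd.Output() // stdout is still returned on non-zero exit
	fmt.Printf("host state: %s\n", out) // "Stopped" throughout this run
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 7 is what a fully stopped profile returns in this report;
		// the helper logs it as "may be ok" rather than failing the test.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}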

TestStartStop/group/no-preload/serial/SecondStart (5.9s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-599000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-599000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.824961375s)

-- stdout --
	* [no-preload-599000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-599000" primary control-plane node in "no-preload-599000" cluster
	* Restarting existing qemu2 VM for "no-preload-599000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-599000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:10:27.037363   10202 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:27.037527   10202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:27.037530   10202 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:27.037533   10202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:27.037666   10202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:27.038761   10202 out.go:352] Setting JSON to false
	I1007 05:10:27.056312   10202 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5998,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:27.056388   10202 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:27.060800   10202 out.go:177] * [no-preload-599000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:27.066823   10202 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:27.066865   10202 notify.go:220] Checking for updates...
	I1007 05:10:27.073851   10202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:27.076730   10202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:27.079819   10202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:27.082846   10202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:27.084295   10202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:27.088162   10202 config.go:182] Loaded profile config "no-preload-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:27.088485   10202 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:27.092809   10202 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:10:27.097829   10202 start.go:297] selected driver: qemu2
	I1007 05:10:27.097835   10202 start.go:901] validating driver "qemu2" against &{Name:no-preload-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:27.097890   10202 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:27.100279   10202 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:27.100304   10202 cni.go:84] Creating CNI manager for ""
	I1007 05:10:27.100329   10202 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:10:27.100350   10202 start.go:340] cluster config:
	{Name:no-preload-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:27.104819   10202 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.112804   10202 out.go:177] * Starting "no-preload-599000" primary control-plane node in "no-preload-599000" cluster
	I1007 05:10:27.116851   10202 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:10:27.116969   10202 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/no-preload-599000/config.json ...
	I1007 05:10:27.116985   10202 cache.go:107] acquiring lock: {Name:mkf4d7d0e210cfec46646868b33d8ac3b8550a66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117019   10202 cache.go:107] acquiring lock: {Name:mk12901308a81ec877d86ce3b8cede03e4b4516a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117016   10202 cache.go:107] acquiring lock: {Name:mkc4d442531114768fb87ff22d853aef62e511a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117086   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 05:10:27.117093   10202 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.333µs
	I1007 05:10:27.117101   10202 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 05:10:27.116985   10202 cache.go:107] acquiring lock: {Name:mk2faa9ab705ed967ffa99f82a514544fd6d4ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117122   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1007 05:10:27.117160   10202 cache.go:107] acquiring lock: {Name:mk9fd61c0a8f665c5ee740d7b8d6f55336ed1803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117177   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1007 05:10:27.117185   10202 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 218.5µs
	I1007 05:10:27.117189   10202 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1007 05:10:27.117155   10202 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 131.917µs
	I1007 05:10:27.117204   10202 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1007 05:10:27.117126   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1007 05:10:27.117215   10202 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 226.75µs
	I1007 05:10:27.117219   10202 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1007 05:10:27.117214   10202 cache.go:107] acquiring lock: {Name:mkca30595545392e3814d82d78c6094202928208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117199   10202 cache.go:107] acquiring lock: {Name:mka158b6580ee3652b55864f46fd753d7a5d46ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117169   10202 cache.go:107] acquiring lock: {Name:mk237255dfd48a965795e13c2073abc31711ca68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:27.117250   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1007 05:10:27.117259   10202 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 136.875µs
	I1007 05:10:27.117264   10202 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1007 05:10:27.117338   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1007 05:10:27.117343   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1007 05:10:27.117347   10202 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 242.25µs
	I1007 05:10:27.117351   10202 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 233.167µs
	I1007 05:10:27.117355   10202 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1007 05:10:27.117356   10202 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1007 05:10:27.117338   10202 cache.go:115] /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1007 05:10:27.117363   10202 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 206.25µs
	I1007 05:10:27.117366   10202 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1007 05:10:27.117385   10202 cache.go:87] Successfully saved all images to host disk.
	I1007 05:10:27.117439   10202 start.go:360] acquireMachinesLock for no-preload-599000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:27.117475   10202 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "no-preload-599000"
	I1007 05:10:27.117487   10202 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:27.117490   10202 fix.go:54] fixHost starting: 
	I1007 05:10:27.117626   10202 fix.go:112] recreateIfNeeded on no-preload-599000: state=Stopped err=<nil>
	W1007 05:10:27.117638   10202 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:27.125821   10202 out.go:177] * Restarting existing qemu2 VM for "no-preload-599000" ...
	I1007 05:10:27.129814   10202 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:27.129859   10202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:18:46:0b:a1:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:27.132044   10202 main.go:141] libmachine: STDOUT: 
	I1007 05:10:27.132067   10202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:27.132094   10202 fix.go:56] duration metric: took 14.600542ms for fixHost
	I1007 05:10:27.132098   10202 start.go:83] releasing machines lock for "no-preload-599000", held for 14.619375ms
	W1007 05:10:27.132103   10202 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:27.132132   10202 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:27.132137   10202 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:32.134329   10202 start.go:360] acquireMachinesLock for no-preload-599000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:32.748354   10202 start.go:364] duration metric: took 613.909583ms to acquireMachinesLock for "no-preload-599000"
	I1007 05:10:32.748439   10202 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:32.748459   10202 fix.go:54] fixHost starting: 
	I1007 05:10:32.749168   10202 fix.go:112] recreateIfNeeded on no-preload-599000: state=Stopped err=<nil>
	W1007 05:10:32.749198   10202 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:32.754613   10202 out.go:177] * Restarting existing qemu2 VM for "no-preload-599000" ...
	I1007 05:10:32.769167   10202 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:32.769383   10202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:18:46:0b:a1:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/no-preload-599000/disk.qcow2
	I1007 05:10:32.782285   10202 main.go:141] libmachine: STDOUT: 
	I1007 05:10:32.782347   10202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:32.782426   10202 fix.go:56] duration metric: took 33.968334ms for fixHost
	I1007 05:10:32.782447   10202 start.go:83] releasing machines lock for "no-preload-599000", held for 34.039875ms
	W1007 05:10:32.782695   10202 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-599000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-599000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:32.791549   10202 out.go:201] 
	W1007 05:10:32.797756   10202 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:32.797791   10202 out.go:270] * 
	* 
	W1007 05:10:32.800022   10202 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:32.811678   10202 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-599000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (73.769542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.90s)
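Every QEMU-driver failure in this group reduces to the same host-side symptom, Failed to connect to "/var/run/socket_vmnet": Connection refused — the socket_vmnet daemon was not listening when libmachine tried to attach the VM's network. A hedged diagnostic sketch for the host (socket and binary paths are taken from the logs above; the daemon invocation follows the upstream socket_vmnet README and is an assumption about this CI host's install):

	# Is the socket present, and is any daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, the upstream README starts the daemon roughly as:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet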

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-537000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (34.114916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
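The context "old-k8s-version-537000" does not exist errors here are downstream of the failed start: the cluster never came up, so minikube never wrote a kubeconfig context for the profile, and every kubectl call against that context fails immediately. The state can be confirmed on the host with standard commands (illustrative; "profile list" is a stock minikube subcommand not shown elsewhere in this run):

	kubectl config get-contexts
	out/minikube-darwin-arm64 profile list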

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-537000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-537000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-537000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.134084ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-537000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-537000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (33.60925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-537000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (33.531042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
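The (-want +got) block above reads as a go-cmp-style diff: each line prefixed "-" is an image the test expected for v1.20.0 but did not find, and there are no "+" lines because "image list" returned nothing from the stopped host. With a running cluster the same check can be replayed by hand (command copied from the test invocation above):

	out/minikube-darwin-arm64 -p old-k8s-version-537000 image list --format=json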

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1: exit status 83 (44.766958ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-537000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-537000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:10:30.092475   10221 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:30.092875   10221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:30.092878   10221 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:30.092881   10221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:30.093062   10221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:30.093273   10221 out.go:352] Setting JSON to false
	I1007 05:10:30.093282   10221 mustload.go:65] Loading cluster: old-k8s-version-537000
	I1007 05:10:30.093515   10221 config.go:182] Loaded profile config "old-k8s-version-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:10:30.097365   10221 out.go:177] * The control-plane node old-k8s-version-537000 host is not running: state=Stopped
	I1007 05:10:30.101389   10221 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-537000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (33.178958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (33.648791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
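Note that pause exits with status 83 rather than a hard failure; this appears to be minikube's advisory exit path for "host not running", where stdout prescribes the recovery itself. Once the underlying socket_vmnet problem is fixed, the manual sequence would be (commands taken verbatim from the output above):

	out/minikube-darwin-arm64 start -p old-k8s-version-537000
	out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1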

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-430000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-430000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.824124833s)

                                                
                                                
-- stdout --
	* [embed-certs-430000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-430000" primary control-plane node in "embed-certs-430000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-430000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:10:30.431084   10238 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:30.431234   10238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:30.431238   10238 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:30.431241   10238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:30.431396   10238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:30.432554   10238 out.go:352] Setting JSON to false
	I1007 05:10:30.450252   10238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6001,"bootTime":1728297029,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:30.450322   10238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:30.455417   10238 out.go:177] * [embed-certs-430000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:30.462398   10238 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:30.462437   10238 notify.go:220] Checking for updates...
	I1007 05:10:30.469349   10238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:30.472388   10238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:30.475400   10238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:30.478434   10238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:30.481346   10238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:30.484666   10238 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:30.484735   10238 config.go:182] Loaded profile config "no-preload-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:30.484789   10238 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:30.489409   10238 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:10:30.496398   10238 start.go:297] selected driver: qemu2
	I1007 05:10:30.496404   10238 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:10:30.496413   10238 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:30.498938   10238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:10:30.502339   10238 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:10:30.505448   10238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:30.505478   10238 cni.go:84] Creating CNI manager for ""
	I1007 05:10:30.505508   10238 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:10:30.505516   10238 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:10:30.505542   10238 start.go:340] cluster config:
	{Name:embed-certs-430000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:30.510412   10238 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:30.514366   10238 out.go:177] * Starting "embed-certs-430000" primary control-plane node in "embed-certs-430000" cluster
	I1007 05:10:30.518227   10238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:10:30.518251   10238 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:10:30.518258   10238 cache.go:56] Caching tarball of preloaded images
	I1007 05:10:30.518333   10238 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:10:30.518345   10238 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:10:30.518411   10238 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/embed-certs-430000/config.json ...
	I1007 05:10:30.518422   10238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/embed-certs-430000/config.json: {Name:mk768dc737845368c584597e4317e993fa47097c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:10:30.518727   10238 start.go:360] acquireMachinesLock for embed-certs-430000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:30.518775   10238 start.go:364] duration metric: took 41.875µs to acquireMachinesLock for "embed-certs-430000"
	I1007 05:10:30.518787   10238 start.go:93] Provisioning new machine with config: &{Name:embed-certs-430000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:30.518824   10238 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:30.522474   10238 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:30.539232   10238 start.go:159] libmachine.API.Create for "embed-certs-430000" (driver="qemu2")
	I1007 05:10:30.539257   10238 client.go:168] LocalClient.Create starting
	I1007 05:10:30.539336   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:30.539376   10238 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:30.539390   10238 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:30.539434   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:30.539465   10238 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:30.539482   10238 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:30.539846   10238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:30.682968   10238 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:30.725479   10238 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:30.725485   10238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:30.725668   10238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:30.735558   10238 main.go:141] libmachine: STDOUT: 
	I1007 05:10:30.735590   10238 main.go:141] libmachine: STDERR: 
	I1007 05:10:30.735654   10238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2 +20000M
	I1007 05:10:30.744023   10238 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:30.744036   10238 main.go:141] libmachine: STDERR: 
	I1007 05:10:30.744050   10238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:30.744055   10238 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:30.744066   10238 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:30.744103   10238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:65:9a:17:4a:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:30.745864   10238 main.go:141] libmachine: STDOUT: 
	I1007 05:10:30.745879   10238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:30.745899   10238 client.go:171] duration metric: took 206.634ms to LocalClient.Create
	I1007 05:10:32.748121   10238 start.go:128] duration metric: took 2.229279875s to createHost
	I1007 05:10:32.748248   10238 start.go:83] releasing machines lock for "embed-certs-430000", held for 2.22946925s
	W1007 05:10:32.748296   10238 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:32.767675   10238 out.go:177] * Deleting "embed-certs-430000" in qemu2 ...
	W1007 05:10:32.807975   10238 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:32.808009   10238 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:37.810238   10238 start.go:360] acquireMachinesLock for embed-certs-430000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:37.810747   10238 start.go:364] duration metric: took 414.459µs to acquireMachinesLock for "embed-certs-430000"
	I1007 05:10:37.810871   10238 start.go:93] Provisioning new machine with config: &{Name:embed-certs-430000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:37.811093   10238 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:37.819646   10238 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:37.869092   10238 start.go:159] libmachine.API.Create for "embed-certs-430000" (driver="qemu2")
	I1007 05:10:37.869156   10238 client.go:168] LocalClient.Create starting
	I1007 05:10:37.869300   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:37.869384   10238 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:37.869402   10238 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:37.869480   10238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:37.869539   10238 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:37.869553   10238 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:37.870128   10238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:38.028827   10238 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:38.158631   10238 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:38.158638   10238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:38.158850   10238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:38.169065   10238 main.go:141] libmachine: STDOUT: 
	I1007 05:10:38.169100   10238 main.go:141] libmachine: STDERR: 
	I1007 05:10:38.169165   10238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2 +20000M
	I1007 05:10:38.177560   10238 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:38.177573   10238 main.go:141] libmachine: STDERR: 
	I1007 05:10:38.177585   10238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:38.177590   10238 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:38.177613   10238 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:38.177641   10238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:8a:89:ac:12:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:38.179438   10238 main.go:141] libmachine: STDOUT: 
	I1007 05:10:38.179452   10238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:38.179464   10238 client.go:171] duration metric: took 310.303584ms to LocalClient.Create
	I1007 05:10:40.181640   10238 start.go:128] duration metric: took 2.370524s to createHost
	I1007 05:10:40.181743   10238 start.go:83] releasing machines lock for "embed-certs-430000", held for 2.37097775s
	W1007 05:10:40.182091   10238 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-430000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-430000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:40.191754   10238 out.go:201] 
	W1007 05:10:40.196732   10238 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:40.196768   10238 out.go:270] * 
	* 
	W1007 05:10:40.199652   10238 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:40.208684   10238 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-430000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (70.040292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.90s)
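The libmachine trace above shows the launch pattern precisely: QEMU is exec'd through socket_vmnet_client, which connects to /var/run/socket_vmnet first and hands the connection to the child as file descriptor 3 (hence "-netdev socket,id=net0,fd=3" on the qemu-system-aarch64 command line). That connection can be probed without booting a VM; a hedged sketch, assuming the client will wrap an arbitrary command the same way it wraps QEMU here:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# On this host the probe should reproduce the failure:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused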

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-599000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (35.338667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-599000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-599000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-599000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.391208ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-599000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-599000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (32.127291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-599000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (32.623375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-599000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-599000 --alsologtostderr -v=1: exit status 83 (44.048791ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-599000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-599000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:10:33.108278   10260 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:33.108459   10260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:33.108463   10260 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:33.108465   10260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:33.108594   10260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:33.108836   10260 out.go:352] Setting JSON to false
	I1007 05:10:33.108845   10260 mustload.go:65] Loading cluster: no-preload-599000
	I1007 05:10:33.109088   10260 config.go:182] Loaded profile config "no-preload-599000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:33.113600   10260 out.go:177] * The control-plane node no-preload-599000 host is not running: state=Stopped
	I1007 05:10:33.117597   10260 out.go:177]   To start a cluster, run: "minikube start -p no-preload-599000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-599000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (33.769959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (33.4345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-599000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-717000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-717000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.807828208s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-717000" primary control-plane node in "default-k8s-diff-port-717000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-717000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:10:33.562181   10284 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:33.562344   10284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:33.562347   10284 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:33.562350   10284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:33.562492   10284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:33.563677   10284 out.go:352] Setting JSON to false
	I1007 05:10:33.581340   10284 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6004,"bootTime":1728297029,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:33.581418   10284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:33.586624   10284 out.go:177] * [default-k8s-diff-port-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:33.592526   10284 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:33.592606   10284 notify.go:220] Checking for updates...
	I1007 05:10:33.599591   10284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:33.602541   10284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:33.605633   10284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:33.608602   10284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:33.610106   10284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:33.613947   10284 config.go:182] Loaded profile config "embed-certs-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:33.614006   10284 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:33.614060   10284 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:33.618618   10284 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:10:33.624561   10284 start.go:297] selected driver: qemu2
	I1007 05:10:33.624567   10284 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:10:33.624573   10284 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:33.627070   10284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:10:33.630550   10284 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:10:33.633729   10284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:33.633758   10284 cni.go:84] Creating CNI manager for ""
	I1007 05:10:33.633780   10284 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:10:33.633788   10284 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:10:33.633821   10284 start.go:340] cluster config:
	{Name:default-k8s-diff-port-717000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:33.638444   10284 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:33.646573   10284 out.go:177] * Starting "default-k8s-diff-port-717000" primary control-plane node in "default-k8s-diff-port-717000" cluster
	I1007 05:10:33.650551   10284 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:10:33.650565   10284 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:10:33.650573   10284 cache.go:56] Caching tarball of preloaded images
	I1007 05:10:33.650651   10284 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:10:33.650657   10284 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:10:33.650716   10284 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/default-k8s-diff-port-717000/config.json ...
	I1007 05:10:33.650728   10284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/default-k8s-diff-port-717000/config.json: {Name:mkc6ae7a29ebc8eaea24f2a585a9a434ac4696de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:10:33.651092   10284 start.go:360] acquireMachinesLock for default-k8s-diff-port-717000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:33.651145   10284 start.go:364] duration metric: took 44.834µs to acquireMachinesLock for "default-k8s-diff-port-717000"
	I1007 05:10:33.651158   10284 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:33.651186   10284 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:33.658532   10284 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:33.676330   10284 start.go:159] libmachine.API.Create for "default-k8s-diff-port-717000" (driver="qemu2")
	I1007 05:10:33.676360   10284 client.go:168] LocalClient.Create starting
	I1007 05:10:33.676430   10284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:33.676481   10284 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:33.676497   10284 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:33.676543   10284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:33.676575   10284 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:33.676584   10284 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:33.677026   10284 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:33.818105   10284 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:33.953687   10284 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:33.953698   10284 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:33.953884   10284 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:33.963810   10284 main.go:141] libmachine: STDOUT: 
	I1007 05:10:33.963827   10284 main.go:141] libmachine: STDERR: 
	I1007 05:10:33.963882   10284 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2 +20000M
	I1007 05:10:33.972289   10284 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:33.972305   10284 main.go:141] libmachine: STDERR: 
	I1007 05:10:33.972327   10284 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:33.972338   10284 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:33.972360   10284 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:33.972390   10284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c3:de:fb:d0:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:33.974215   10284 main.go:141] libmachine: STDOUT: 
	I1007 05:10:33.974229   10284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:33.974248   10284 client.go:171] duration metric: took 297.883083ms to LocalClient.Create
	I1007 05:10:35.976424   10284 start.go:128] duration metric: took 2.325222833s to createHost
	I1007 05:10:35.976499   10284 start.go:83] releasing machines lock for "default-k8s-diff-port-717000", held for 2.325350958s
	W1007 05:10:35.976541   10284 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:35.983632   10284 out.go:177] * Deleting "default-k8s-diff-port-717000" in qemu2 ...
	W1007 05:10:36.012026   10284 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:36.012048   10284 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:41.014183   10284 start.go:360] acquireMachinesLock for default-k8s-diff-port-717000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:41.014439   10284 start.go:364] duration metric: took 187.375µs to acquireMachinesLock for "default-k8s-diff-port-717000"
	I1007 05:10:41.014531   10284 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:41.014688   10284 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:41.023135   10284 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:41.061618   10284 start.go:159] libmachine.API.Create for "default-k8s-diff-port-717000" (driver="qemu2")
	I1007 05:10:41.061667   10284 client.go:168] LocalClient.Create starting
	I1007 05:10:41.061748   10284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:41.061801   10284 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:41.061817   10284 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:41.061870   10284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:41.061899   10284 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:41.061913   10284 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:41.062531   10284 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:41.222055   10284 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:41.275595   10284 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:41.275600   10284 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:41.275792   10284 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:41.285790   10284 main.go:141] libmachine: STDOUT: 
	I1007 05:10:41.285805   10284 main.go:141] libmachine: STDERR: 
	I1007 05:10:41.285870   10284 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2 +20000M
	I1007 05:10:41.294289   10284 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:41.294302   10284 main.go:141] libmachine: STDERR: 
	I1007 05:10:41.294314   10284 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:41.294327   10284 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:41.294338   10284 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:41.294374   10284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:cf:0a:08:d8:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:41.296126   10284 main.go:141] libmachine: STDOUT: 
	I1007 05:10:41.296138   10284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:41.296162   10284 client.go:171] duration metric: took 234.480833ms to LocalClient.Create
	I1007 05:10:43.298341   10284 start.go:128] duration metric: took 2.283634875s to createHost
	I1007 05:10:43.298405   10284 start.go:83] releasing machines lock for "default-k8s-diff-port-717000", held for 2.283955833s
	W1007 05:10:43.298758   10284 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:43.309407   10284 out.go:201] 
	W1007 05:10:43.313537   10284 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:43.313584   10284 out.go:270] * 
	* 
	W1007 05:10:43.316128   10284 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:43.325452   10284 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-717000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (69.931541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.88s)
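
The proximate failure in every start attempt above is socket_vmnet_client exiting with Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing is listening on the vmnet helper socket that the qemu2 driver's networking depends on. A minimal triage sketch in Go (not part of the test suite; the socket path is taken from the failing command line) that reproduces just that check:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing socket_vmnet_client invocation
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the condition the driver keeps hitting: no daemon on the socket.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way, the socket_vmnet daemon needs to be (re)started on the host before any qemu2-driver test in this report can pass.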

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-430000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-430000 create -f testdata/busybox.yaml: exit status 1 (28.831584ms)

** stderr ** 
	error: context "embed-certs-430000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-430000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (33.505584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (32.300666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
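
This failure is downstream of the one above: because FirstStart never created the cluster, minikube never wrote an embed-certs-430000 entry into the kubeconfig, so every kubectl --context call fails immediately. A hypothetical standalone check (assuming k8s.io/client-go as a dependency; not part of helpers_test.go) that makes the missing-context condition explicit:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG is set in these runs (see the minikube output above).
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot load kubeconfig: %v\n", err)
		os.Exit(1)
	}
	const name = "embed-certs-430000"
	if _, ok := cfg.Contexts[name]; !ok {
		// Exactly what kubectl reports above: the failed first start never
		// created the context, so nothing downstream can work.
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Printf("context %q found\n", name)
}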

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-430000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-430000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-430000 describe deploy/metrics-server -n kube-system: exit status 1 (26.902125ms)

** stderr ** 
	error: context "embed-certs-430000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-430000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (33.009625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
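
The assertion that matters here never runs: the test expects the metrics-server deployment description to mention the rewritten image fake.domain/registry.k8s.io/echoserver:1.4 (from the --images/--registries flags), but kubectl fails first for lack of a context. A simplified sketch of the same assertion as a standalone program (hypothetical; the real test drives kubectl through its own helpers):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-430000",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		// In this run execution always stops here: the context was never created.
		fmt.Fprintf(os.Stderr, "describe failed: %v\n%s", err, out)
		os.Exit(1)
	}
	// The custom registry/image pair passed via --registries/--images.
	const want = "fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(string(out), want) {
		fmt.Fprintf(os.Stderr, "addon image not rewritten; wanted %q\n", want)
		os.Exit(1)
	}
	fmt.Println("metrics-server uses the expected image")
}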

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-717000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-717000 create -f testdata/busybox.yaml: exit status 1 (28.458959ms)

** stderr ** 
	error: context "default-k8s-diff-port-717000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-717000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (33.189ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (33.322167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
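
For contrast, the one part of machine creation that does succeed in the FirstStart logs above is the disk-image step: libmachine shells out to qemu-img twice, converting the raw scratch file to qcow2 and then growing it by the requested +20000M. A standalone sketch of those two invocations (paths are illustrative placeholders, not the real .minikube locations):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and fails loudly, mirroring how libmachine logs
// STDOUT/STDERR for each qemu-img call in the report.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	fmt.Printf("STDOUT: %s\n", out)
}

func main() {
	raw := "/tmp/disk.qcow2.raw" // placeholder for .minikube/machines/<profile>/disk.qcow2.raw
	img := "/tmp/disk.qcow2"     // placeholder for the final disk image
	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
	run("qemu-img", "resize", img, "+20000M") // same "+20000M" as in the log
}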

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-717000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-717000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-717000 describe deploy/metrics-server -n kube-system: exit status 1 (27.185958ms)

** stderr ** 
	error: context "default-k8s-diff-port-717000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-717000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (33.428375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-430000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-430000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.192417917s)

-- stdout --
	* [embed-certs-430000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-430000" primary control-plane node in "embed-certs-430000" cluster
	* Restarting existing qemu2 VM for "embed-certs-430000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-430000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:10:44.328692   10354 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:44.328847   10354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:44.328850   10354 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:44.328852   10354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:44.328980   10354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:44.329996   10354 out.go:352] Setting JSON to false
	I1007 05:10:44.347539   10354 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6015,"bootTime":1728297029,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:44.347610   10354 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:44.352588   10354 out.go:177] * [embed-certs-430000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:44.359654   10354 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:44.359720   10354 notify.go:220] Checking for updates...
	I1007 05:10:44.366639   10354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:44.369820   10354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:44.372662   10354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:44.375580   10354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:44.378666   10354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:44.381879   10354 config.go:182] Loaded profile config "embed-certs-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:44.382128   10354 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:44.386604   10354 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:10:44.393527   10354 start.go:297] selected driver: qemu2
	I1007 05:10:44.393532   10354 start.go:901] validating driver "qemu2" against &{Name:embed-certs-430000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:44.393586   10354 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:44.396069   10354 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:44.396093   10354 cni.go:84] Creating CNI manager for ""
	I1007 05:10:44.396114   10354 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:10:44.396142   10354 start.go:340] cluster config:
	{Name:embed-certs-430000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:44.400362   10354 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:44.407546   10354 out.go:177] * Starting "embed-certs-430000" primary control-plane node in "embed-certs-430000" cluster
	I1007 05:10:44.411592   10354 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:10:44.411606   10354 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:10:44.411617   10354 cache.go:56] Caching tarball of preloaded images
	I1007 05:10:44.411687   10354 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:10:44.411692   10354 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:10:44.411761   10354 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/embed-certs-430000/config.json ...
	I1007 05:10:44.412207   10354 start.go:360] acquireMachinesLock for embed-certs-430000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:44.412236   10354 start.go:364] duration metric: took 22.917µs to acquireMachinesLock for "embed-certs-430000"
	I1007 05:10:44.412245   10354 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:44.412250   10354 fix.go:54] fixHost starting: 
	I1007 05:10:44.412361   10354 fix.go:112] recreateIfNeeded on embed-certs-430000: state=Stopped err=<nil>
	W1007 05:10:44.412370   10354 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:44.419563   10354 out.go:177] * Restarting existing qemu2 VM for "embed-certs-430000" ...
	I1007 05:10:44.423622   10354 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:44.423669   10354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:8a:89:ac:12:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:44.425644   10354 main.go:141] libmachine: STDOUT: 
	I1007 05:10:44.425662   10354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:44.425691   10354 fix.go:56] duration metric: took 13.440667ms for fixHost
	I1007 05:10:44.425695   10354 start.go:83] releasing machines lock for "embed-certs-430000", held for 13.4555ms
	W1007 05:10:44.425701   10354 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:44.425749   10354 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:44.425753   10354 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:49.426918   10354 start.go:360] acquireMachinesLock for embed-certs-430000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:49.427272   10354 start.go:364] duration metric: took 273.375µs to acquireMachinesLock for "embed-certs-430000"
	I1007 05:10:49.427397   10354 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:49.427416   10354 fix.go:54] fixHost starting: 
	I1007 05:10:49.428215   10354 fix.go:112] recreateIfNeeded on embed-certs-430000: state=Stopped err=<nil>
	W1007 05:10:49.428243   10354 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:49.437791   10354 out.go:177] * Restarting existing qemu2 VM for "embed-certs-430000" ...
	I1007 05:10:49.441793   10354 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:49.442007   10354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:8a:89:ac:12:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/embed-certs-430000/disk.qcow2
	I1007 05:10:49.452451   10354 main.go:141] libmachine: STDOUT: 
	I1007 05:10:49.452548   10354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:49.452635   10354 fix.go:56] duration metric: took 25.218917ms for fixHost
	I1007 05:10:49.452660   10354 start.go:83] releasing machines lock for "embed-certs-430000", held for 25.364625ms
	W1007 05:10:49.452900   10354 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-430000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-430000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:49.460736   10354 out.go:201] 
	W1007 05:10:49.464838   10354 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:49.464877   10354 out.go:270] * 
	* 
	W1007 05:10:49.467524   10354 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:49.475686   10354 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-430000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (71.162958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
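
The SecondStart logs show minikube's recovery shape: one StartHost failure is tolerated, retried once after a fixed five-second delay, and only the second failure escalates to the fatal GUEST_PROVISION exit. A simplified sketch of that control flow (startHost here is a stand-in for the real qemu2 host-start path, not minikube's actual function):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 host-start path; in this run it always
// fails because nothing is listening on /var/run/socket_vmnet.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause in the log
	if err := startHost(); err != nil {
		// Second failure is terminal, matching the GUEST_PROVISION exit above.
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}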

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-717000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-717000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.200925209s)

-- stdout --
	* [default-k8s-diff-port-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-717000" primary control-plane node in "default-k8s-diff-port-717000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:10:47.640154   10378 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:47.640309   10378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:47.640312   10378 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:47.640314   10378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:47.640441   10378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:47.641499   10378 out.go:352] Setting JSON to false
	I1007 05:10:47.659127   10378 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6018,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:47.659209   10378 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:47.664522   10378 out.go:177] * [default-k8s-diff-port-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:47.672732   10378 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:47.672790   10378 notify.go:220] Checking for updates...
	I1007 05:10:47.679674   10378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:47.683658   10378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:47.686721   10378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:47.689685   10378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:47.692690   10378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:47.695969   10378 config.go:182] Loaded profile config "default-k8s-diff-port-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:47.696260   10378 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:47.700673   10378 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:10:47.707658   10378 start.go:297] selected driver: qemu2
	I1007 05:10:47.707665   10378 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:47.707738   10378 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:47.710327   10378 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:10:47.710350   10378 cni.go:84] Creating CNI manager for ""
	I1007 05:10:47.710376   10378 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:10:47.710395   10378 start.go:340] cluster config:
	{Name:default-k8s-diff-port-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:47.715032   10378 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:47.721620   10378 out.go:177] * Starting "default-k8s-diff-port-717000" primary control-plane node in "default-k8s-diff-port-717000" cluster
	I1007 05:10:47.725691   10378 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:10:47.725713   10378 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:10:47.725721   10378 cache.go:56] Caching tarball of preloaded images
	I1007 05:10:47.725806   10378 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:10:47.725812   10378 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:10:47.725884   10378 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/default-k8s-diff-port-717000/config.json ...
	I1007 05:10:47.726290   10378 start.go:360] acquireMachinesLock for default-k8s-diff-port-717000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:47.726340   10378 start.go:364] duration metric: took 42.417µs to acquireMachinesLock for "default-k8s-diff-port-717000"
	I1007 05:10:47.726350   10378 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:47.726355   10378 fix.go:54] fixHost starting: 
	I1007 05:10:47.726470   10378 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717000: state=Stopped err=<nil>
	W1007 05:10:47.726480   10378 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:47.730645   10378 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-717000" ...
	I1007 05:10:47.738713   10378 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:47.738767   10378 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:cf:0a:08:d8:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:47.741045   10378 main.go:141] libmachine: STDOUT: 
	I1007 05:10:47.741066   10378 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:47.741098   10378 fix.go:56] duration metric: took 14.741792ms for fixHost
	I1007 05:10:47.741103   10378 start.go:83] releasing machines lock for "default-k8s-diff-port-717000", held for 14.757167ms
	W1007 05:10:47.741108   10378 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:47.741145   10378 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:47.741149   10378 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:52.743307   10378 start.go:360] acquireMachinesLock for default-k8s-diff-port-717000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:52.743785   10378 start.go:364] duration metric: took 372µs to acquireMachinesLock for "default-k8s-diff-port-717000"
	I1007 05:10:52.743952   10378 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:10:52.743971   10378 fix.go:54] fixHost starting: 
	I1007 05:10:52.744724   10378 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717000: state=Stopped err=<nil>
	W1007 05:10:52.744754   10378 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:10:52.758222   10378 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-717000" ...
	I1007 05:10:52.762129   10378 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:52.762321   10378 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:cf:0a:08:d8:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/default-k8s-diff-port-717000/disk.qcow2
	I1007 05:10:52.772981   10378 main.go:141] libmachine: STDOUT: 
	I1007 05:10:52.773033   10378 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:52.773128   10378 fix.go:56] duration metric: took 29.152958ms for fixHost
	I1007 05:10:52.773145   10378 start.go:83] releasing machines lock for "default-k8s-diff-port-717000", held for 29.336917ms
	W1007 05:10:52.773377   10378 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:52.781916   10378 out.go:201] 
	W1007 05:10:52.785174   10378 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:52.785196   10378 out.go:270] * 
	* 
	W1007 05:10:52.787681   10378 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:52.796178   10378 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-717000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (71.436584ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
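Every failure in this group traces to the same root cause: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and every downstream check sees a stopped host. The reachability check can be reproduced outside minikube with a short Go probe; this is a hypothetical diagnostic sketch using the socket path from the log above, not part of the test suite:

// probe_socket_vmnet.go - hypothetical diagnostic, not part of the test
// suite. Dials the unix socket that socket_vmnet_client needs; an error
// here matches the "Connection refused" failure seen throughout this log.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, the daemon is simply not running on the build agent; restarting it (for a Homebrew install, typically "sudo brew services start socket_vmnet", though the exact invocation depends on how it was installed) should clear this entire failure group.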

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-430000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (34.853042ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
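The context "embed-certs-430000" does not exist errors here are a downstream effect of the start failure: because the cluster never came up, minikube never wrote a context for the profile into the kubeconfig, so every kubectl call naming that context fails immediately. A minimal sketch (hypothetical, not part of the test suite) of listing the contexts that do exist, using the same client-go loading rules kubectl uses:

// list_contexts.go - hypothetical sketch, not part of the test suite.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does (honors $KUBECONFIG).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		log.Fatal(err)
	}
	for name := range cfg.Contexts {
		fmt.Println(name) // a profile whose start failed will be absent
	}
}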

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-430000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-430000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-430000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.226667ms)

** stderr ** 
	error: context "embed-certs-430000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-430000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (33.17475ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-430000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (33.212167ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
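The "(-want +got)" block above is go-cmp diff output: each line prefixed with "-" is an image the test expected but did not find, and because "image list" returns nothing for a host that never started, the entire expected set is reported missing. A minimal sketch of how this comparison style works (hypothetical inputs; the test's real want list is the one shown above):

// image_diff.go - hypothetical sketch of the (-want +got) comparison.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"registry.k8s.io/pause:3.10"} // one entry for brevity
	got := []string{}                              // empty: host never started
	if diff := cmp.Diff(want, got); diff != "" {
		// "-" lines come from want, "+" lines from got.
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}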

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-430000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-430000 --alsologtostderr -v=1: exit status 83 (45.118959ms)

-- stdout --
	* The control-plane node embed-certs-430000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-430000"
-- /stdout --
** stderr ** 
	I1007 05:10:49.766519   10397 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:49.766724   10397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:49.766727   10397 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:49.766729   10397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:49.766862   10397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:49.767081   10397 out.go:352] Setting JSON to false
	I1007 05:10:49.767089   10397 mustload.go:65] Loading cluster: embed-certs-430000
	I1007 05:10:49.767328   10397 config.go:182] Loaded profile config "embed-certs-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:49.771451   10397 out.go:177] * The control-plane node embed-certs-430000 host is not running: state=Stopped
	I1007 05:10:49.775452   10397 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-430000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-430000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (33.28825ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (32.902ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-430000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-458000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-458000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.766068416s)

-- stdout --
	* [newest-cni-458000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-458000" primary control-plane node in "newest-cni-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:10:50.099658   10414 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:50.099815   10414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:50.099819   10414 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:50.099821   10414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:50.099960   10414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:50.101155   10414 out.go:352] Setting JSON to false
	I1007 05:10:50.118713   10414 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6021,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:10:50.118792   10414 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:10:50.123535   10414 out.go:177] * [newest-cni-458000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:10:50.130450   10414 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:10:50.130511   10414 notify.go:220] Checking for updates...
	I1007 05:10:50.136416   10414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:10:50.139430   10414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:10:50.142450   10414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:10:50.145317   10414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:10:50.148410   10414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:10:50.153923   10414 config.go:182] Loaded profile config "default-k8s-diff-port-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:50.153985   10414 config.go:182] Loaded profile config "multinode-328000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:50.154034   10414 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:10:50.157400   10414 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:10:50.164436   10414 start.go:297] selected driver: qemu2
	I1007 05:10:50.164442   10414 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:10:50.164447   10414 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:10:50.166925   10414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1007 05:10:50.166965   10414 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1007 05:10:50.174419   10414 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:10:50.177540   10414 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1007 05:10:50.177560   10414 cni.go:84] Creating CNI manager for ""
	I1007 05:10:50.177589   10414 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:10:50.177593   10414 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:10:50.177622   10414 start.go:340] cluster config:
	{Name:newest-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:10:50.182396   10414 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:10:50.190438   10414 out.go:177] * Starting "newest-cni-458000" primary control-plane node in "newest-cni-458000" cluster
	I1007 05:10:50.194436   10414 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:10:50.194454   10414 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:10:50.194465   10414 cache.go:56] Caching tarball of preloaded images
	I1007 05:10:50.194584   10414 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:10:50.194605   10414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:10:50.194679   10414 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/newest-cni-458000/config.json ...
	I1007 05:10:50.194696   10414 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/newest-cni-458000/config.json: {Name:mkbb92768713f63dc80c97a2c668a8265091c83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:10:50.195075   10414 start.go:360] acquireMachinesLock for newest-cni-458000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:50.195130   10414 start.go:364] duration metric: took 48.084µs to acquireMachinesLock for "newest-cni-458000"
	I1007 05:10:50.195145   10414 start.go:93] Provisioning new machine with config: &{Name:newest-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:50.195201   10414 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:50.203477   10414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:50.221483   10414 start.go:159] libmachine.API.Create for "newest-cni-458000" (driver="qemu2")
	I1007 05:10:50.221516   10414 client.go:168] LocalClient.Create starting
	I1007 05:10:50.221598   10414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:50.221639   10414 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:50.221653   10414 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:50.221703   10414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:50.221736   10414 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:50.221748   10414 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:50.222214   10414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:50.363263   10414 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:50.438644   10414 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:50.438653   10414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:50.438825   10414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:10:50.448720   10414 main.go:141] libmachine: STDOUT: 
	I1007 05:10:50.448738   10414 main.go:141] libmachine: STDERR: 
	I1007 05:10:50.448796   10414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2 +20000M
	I1007 05:10:50.457221   10414 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:50.457237   10414 main.go:141] libmachine: STDERR: 
	I1007 05:10:50.457251   10414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:10:50.457259   10414 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:50.457273   10414 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:50.457307   10414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:b8:d6:f0:c5:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:10:50.459190   10414 main.go:141] libmachine: STDOUT: 
	I1007 05:10:50.459205   10414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:50.459228   10414 client.go:171] duration metric: took 237.705209ms to LocalClient.Create
	I1007 05:10:52.461395   10414 start.go:128] duration metric: took 2.266179583s to createHost
	I1007 05:10:52.461449   10414 start.go:83] releasing machines lock for "newest-cni-458000", held for 2.266316167s
	W1007 05:10:52.461495   10414 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:52.471704   10414 out.go:177] * Deleting "newest-cni-458000" in qemu2 ...
	W1007 05:10:52.494914   10414 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:52.494940   10414 start.go:729] Will try again in 5 seconds ...
	I1007 05:10:57.497003   10414 start.go:360] acquireMachinesLock for newest-cni-458000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:10:57.497594   10414 start.go:364] duration metric: took 474.833µs to acquireMachinesLock for "newest-cni-458000"
	I1007 05:10:57.497690   10414 start.go:93] Provisioning new machine with config: &{Name:newest-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:10:57.497948   10414 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:10:57.503190   10414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:10:57.551814   10414 start.go:159] libmachine.API.Create for "newest-cni-458000" (driver="qemu2")
	I1007 05:10:57.551879   10414 client.go:168] LocalClient.Create starting
	I1007 05:10:57.552020   10414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/ca.pem
	I1007 05:10:57.552100   10414 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:57.552119   10414 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:57.552199   10414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19763-6232/.minikube/certs/cert.pem
	I1007 05:10:57.552256   10414 main.go:141] libmachine: Decoding PEM data...
	I1007 05:10:57.552270   10414 main.go:141] libmachine: Parsing certificate...
	I1007 05:10:57.552887   10414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:10:57.708074   10414 main.go:141] libmachine: Creating SSH key...
	I1007 05:10:57.770233   10414 main.go:141] libmachine: Creating Disk image...
	I1007 05:10:57.770239   10414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:10:57.770414   10414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:10:57.780223   10414 main.go:141] libmachine: STDOUT: 
	I1007 05:10:57.780255   10414 main.go:141] libmachine: STDERR: 
	I1007 05:10:57.780313   10414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2 +20000M
	I1007 05:10:57.788777   10414 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:10:57.788805   10414 main.go:141] libmachine: STDERR: 
	I1007 05:10:57.788823   10414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:10:57.788828   10414 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:10:57.788837   10414 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:10:57.788868   10414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:e5:57:1b:6c:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:10:57.790697   10414 main.go:141] libmachine: STDOUT: 
	I1007 05:10:57.790717   10414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:10:57.790730   10414 client.go:171] duration metric: took 238.844958ms to LocalClient.Create
	I1007 05:10:59.792900   10414 start.go:128] duration metric: took 2.294929708s to createHost
	I1007 05:10:59.793009   10414 start.go:83] releasing machines lock for "newest-cni-458000", held for 2.295366334s
	W1007 05:10:59.793371   10414 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:10:59.805967   10414 out.go:201] 
	W1007 05:10:59.810105   10414 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:10:59.810160   10414 out.go:270] * 
	* 
	W1007 05:10:59.812897   10414 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:10:59.820646   10414 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-458000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000: exit status 7 (73.577709ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-458000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.84s)
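Unlike the SecondStart cases, FirstStart provisions a fresh machine: the qemu-img convert and resize steps above complete without error, and only the final socket_vmnet-backed QEMU launch fails. The recovery flow visible in the log - fail, delete the half-created host, wait five seconds, try once more, then exit with status 80 - reduces to a retry-once pattern. The sketch below is a schematic reduction of that flow, not minikube's actual implementation:

// retry_once.go - schematic reduction of the start flow in this log,
// not minikube's actual code.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startWithRetry(start func() error, cleanup func()) error {
	if err := start(); err == nil {
		return nil
	}
	cleanup()                   // "* Deleting ... in qemu2 ..."
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	return start()              // a second failure is terminal (exit status 80)
}

func main() {
	err := startWithRetry(
		func() error { return errors.New("connection refused") },
		func() { fmt.Println("deleting half-created host") },
	)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}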

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-717000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (35.367292ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-717000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-717000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-717000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.58275ms)

** stderr ** 
	error: context "default-k8s-diff-port-717000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-717000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (33.184917ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-717000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (32.783958ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-717000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-717000 --alsologtostderr -v=1: exit status 83 (45.854875ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-717000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-717000"
-- /stdout --
** stderr ** 
	I1007 05:10:53.086040   10436 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:10:53.086228   10436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:53.086232   10436 out.go:358] Setting ErrFile to fd 2...
	I1007 05:10:53.086235   10436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:10:53.086380   10436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:10:53.086587   10436 out.go:352] Setting JSON to false
	I1007 05:10:53.086595   10436 mustload.go:65] Loading cluster: default-k8s-diff-port-717000
	I1007 05:10:53.086826   10436 config.go:182] Loaded profile config "default-k8s-diff-port-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:10:53.091822   10436 out.go:177] * The control-plane node default-k8s-diff-port-717000 host is not running: state=Stopped
	I1007 05:10:53.095773   10436 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-717000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-717000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (33.2565ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (33.077792ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-458000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-458000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.192342292s)

-- stdout --
	* [newest-cni-458000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-458000" primary control-plane node in "newest-cni-458000" cluster
	* Restarting existing qemu2 VM for "newest-cni-458000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-458000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:11:03.260391   10486 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:11:03.260520   10486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:11:03.260524   10486 out.go:358] Setting ErrFile to fd 2...
	I1007 05:11:03.260526   10486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:11:03.260649   10486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:11:03.261676   10486 out.go:352] Setting JSON to false
	I1007 05:11:03.279236   10486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6034,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:11:03.279304   10486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:11:03.284151   10486 out.go:177] * [newest-cni-458000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:11:03.291093   10486 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 05:11:03.291164   10486 notify.go:220] Checking for updates...
	I1007 05:11:03.298160   10486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 05:11:03.301032   10486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:11:03.304036   10486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:11:03.307129   10486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 05:11:03.310097   10486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:11:03.313474   10486 config.go:182] Loaded profile config "newest-cni-458000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:11:03.313734   10486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:11:03.318097   10486 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:11:03.325097   10486 start.go:297] selected driver: qemu2
	I1007 05:11:03.325103   10486 start.go:901] validating driver "qemu2" against &{Name:newest-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:11:03.325169   10486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:11:03.327655   10486 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1007 05:11:03.327681   10486 cni.go:84] Creating CNI manager for ""
	I1007 05:11:03.327704   10486 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:11:03.327729   10486 start.go:340] cluster config:
	{Name:newest-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:11:03.332244   10486 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:11:03.341100   10486 out.go:177] * Starting "newest-cni-458000" primary control-plane node in "newest-cni-458000" cluster
	I1007 05:11:03.344211   10486 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:11:03.344246   10486 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:11:03.344254   10486 cache.go:56] Caching tarball of preloaded images
	I1007 05:11:03.344336   10486 preload.go:172] Found /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:11:03.344342   10486 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:11:03.344400   10486 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/newest-cni-458000/config.json ...
	I1007 05:11:03.344865   10486 start.go:360] acquireMachinesLock for newest-cni-458000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:11:03.344896   10486 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "newest-cni-458000"
	I1007 05:11:03.344906   10486 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:11:03.344911   10486 fix.go:54] fixHost starting: 
	I1007 05:11:03.345042   10486 fix.go:112] recreateIfNeeded on newest-cni-458000: state=Stopped err=<nil>
	W1007 05:11:03.345051   10486 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:11:03.349094   10486 out.go:177] * Restarting existing qemu2 VM for "newest-cni-458000" ...
	I1007 05:11:03.356052   10486 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:11:03.356100   10486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:e5:57:1b:6c:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:11:03.358358   10486 main.go:141] libmachine: STDOUT: 
	I1007 05:11:03.358394   10486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:11:03.358425   10486 fix.go:56] duration metric: took 13.512ms for fixHost
	I1007 05:11:03.358430   10486 start.go:83] releasing machines lock for "newest-cni-458000", held for 13.528917ms
	W1007 05:11:03.358437   10486 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:11:03.358486   10486 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:11:03.358490   10486 start.go:729] Will try again in 5 seconds ...
	I1007 05:11:08.360734   10486 start.go:360] acquireMachinesLock for newest-cni-458000: {Name:mk797c306c4f0a3b80232af95904e28e8e2ec72b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:11:08.361233   10486 start.go:364] duration metric: took 400.084µs to acquireMachinesLock for "newest-cni-458000"
	I1007 05:11:08.361380   10486 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:11:08.361401   10486 fix.go:54] fixHost starting: 
	I1007 05:11:08.362131   10486 fix.go:112] recreateIfNeeded on newest-cni-458000: state=Stopped err=<nil>
	W1007 05:11:08.362157   10486 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:11:08.370513   10486 out.go:177] * Restarting existing qemu2 VM for "newest-cni-458000" ...
	I1007 05:11:08.373454   10486 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:11:08.373675   10486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:e5:57:1b:6c:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19763-6232/.minikube/machines/newest-cni-458000/disk.qcow2
	I1007 05:11:08.384482   10486 main.go:141] libmachine: STDOUT: 
	I1007 05:11:08.384532   10486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:11:08.384597   10486 fix.go:56] duration metric: took 23.198166ms for fixHost
	I1007 05:11:08.384612   10486 start.go:83] releasing machines lock for "newest-cni-458000", held for 23.356708ms
	W1007 05:11:08.384774   10486 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-458000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-458000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:11:08.392503   10486 out.go:201] 
	W1007 05:11:08.396497   10486 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:11:08.396515   10486 out.go:270] * 
	* 
	W1007 05:11:08.398743   10486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:11:08.406499   10486 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-458000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000: exit status 7 (74.591ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-458000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
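Every restart attempt above fails at the same step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon is not listening on /var/run/socket_vmnet, hence the repeated "Connection refused". A minimal, hypothetical probe of that socket (not part of the test suite) illustrates the check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The qemu2 driver's networking depends on the socket_vmnet daemon
        // accepting connections on this unix socket; "Connection refused"
        // in the log means nothing is listening there.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails, restarting the socket_vmnet service on the host is the natural first step; the suggested "minikube delete -p newest-cni-458000" cannot help while the daemon itself is down.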

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-458000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000: exit status 7 (34.557ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-458000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
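The "(-want +got)" output above is the diff format of github.com/google/go-cmp: every expected image sits on the -want side because "image list" returned nothing from the stopped VM. A reduced sketch of such a comparison (assuming go-cmp, which the diff format suggests; the image list is shortened here):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // Images expected for Kubernetes v1.31.1 (shortened); got is empty
        // because the host never started, so every image reads as missing.
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/kube-proxy:v1.31.1",
            "registry.k8s.io/pause:3.10",
        }
        var got []string
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
        }
    }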

TestStartStop/group/newest-cni/serial/Pause (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-458000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-458000 --alsologtostderr -v=1: exit status 83 (45.955334ms)

-- stdout --
	* The control-plane node newest-cni-458000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-458000"

-- /stdout --
** stderr ** 
	I1007 05:11:08.610817   10500 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:11:08.611019   10500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:11:08.611022   10500 out.go:358] Setting ErrFile to fd 2...
	I1007 05:11:08.611024   10500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:11:08.611156   10500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 05:11:08.611379   10500 out.go:352] Setting JSON to false
	I1007 05:11:08.611387   10500 mustload.go:65] Loading cluster: newest-cni-458000
	I1007 05:11:08.611617   10500 config.go:182] Loaded profile config "newest-cni-458000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:11:08.614727   10500 out.go:177] * The control-plane node newest-cni-458000 host is not running: state=Stopped
	I1007 05:11:08.618640   10500 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-458000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-458000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000: exit status 7 (35.160833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-458000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000: exit status 7 (35.037291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-458000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)
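All three newest-cni failures trace back to the same stopped host: "pause" refuses to run against it (exit status 83), and each post-mortem "status" probe exits with status 7, which the helpers deliberately record as "may be ok" rather than as a separate failure. A sketch of that probe outside the harness (binary path and profile name taken from the log) shows how the exit code carries the host state:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe the post-mortem helper runs: a non-zero exit from
        // "minikube status" encodes the host state, not a harness bug.
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "newest-cni-458000")
        out, err := cmd.Output() // stdout is still returned on non-zero exit
        fmt.Printf("host: %s", out) // "Stopped" in the runs above
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("status exit code:", ee.ExitCode()) // 7 in the runs above
        }
    }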

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 18.55
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.86
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.11
41 TestErrorSpam/pause 0.14
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 11.09
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.92
55 TestFunctional/serial/CacheCmd/cache/add_local 1.05
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 1.36
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.68
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.05
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.14
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.22
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 5.05
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.55
258 TestNoKubernetes/serial/Stop 3.81
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
277 TestStartStop/group/old-k8s-version/serial/Stop 3.3
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
282 TestStartStop/group/no-preload/serial/Stop 2.09
283 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
299 TestStartStop/group/embed-certs/serial/Stop 3.66
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.85
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.12
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 04:44:12.198062    6750 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1007 04:44:12.198415    6750 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-915000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-915000: exit status 85 (101.951791ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:43 PDT |          |
	|         | -p download-only-915000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 04:43:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 04:43:29.857173    6751 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:43:29.857351    6751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:43:29.857354    6751 out.go:358] Setting ErrFile to fd 2...
	I1007 04:43:29.857356    6751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:43:29.857490    6751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	W1007 04:43:29.857618    6751 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19763-6232/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19763-6232/.minikube/config/config.json: no such file or directory
	I1007 04:43:29.859038    6751 out.go:352] Setting JSON to true
	I1007 04:43:29.877048    6751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4380,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:43:29.877127    6751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:43:29.882920    6751 out.go:97] [download-only-915000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:43:29.883056    6751 notify.go:220] Checking for updates...
	W1007 04:43:29.883089    6751 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 04:43:29.885918    6751 out.go:169] MINIKUBE_LOCATION=19763
	I1007 04:43:29.889010    6751 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:43:29.893954    6751 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:43:29.896914    6751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:43:29.899961    6751 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	W1007 04:43:29.905922    6751 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 04:43:29.906164    6751 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:43:29.908903    6751 out.go:97] Using the qemu2 driver based on user configuration
	I1007 04:43:29.908924    6751 start.go:297] selected driver: qemu2
	I1007 04:43:29.908940    6751 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:43:29.909031    6751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:43:29.911914    6751 out.go:169] Automatically selected the socket_vmnet network
	I1007 04:43:29.917416    6751 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1007 04:43:29.917514    6751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 04:43:29.917549    6751 cni.go:84] Creating CNI manager for ""
	I1007 04:43:29.917580    6751 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 04:43:29.917630    6751 start.go:340] cluster config:
	{Name:download-only-915000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-915000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:43:29.922295    6751 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:43:29.925792    6751 out.go:97] Downloading VM boot image ...
	I1007 04:43:29.925821    6751 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1007 04:43:47.639975    6751 out.go:97] Starting "download-only-915000" primary control-plane node in "download-only-915000" cluster
	I1007 04:43:47.639994    6751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 04:43:48.361197    6751 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 04:43:48.361236    6751 cache.go:56] Caching tarball of preloaded images
	I1007 04:43:48.362146    6751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 04:43:48.367145    6751 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 04:43:48.367167    6751 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 04:43:49.512374    6751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 04:44:10.884232    6751 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 04:44:10.884402    6751 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 04:44:11.579449    6751 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 04:44:11.579648    6751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/download-only-915000/config.json ...
	I1007 04:44:11.579665    6751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19763-6232/.minikube/profiles/download-only-915000/config.json: {Name:mkb3cda34e00aed3e3b45773ad5a451249c45514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 04:44:11.579918    6751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 04:44:11.580160    6751 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1007 04:44:12.150034    6751 out.go:193] 
	W1007 04:44:12.154106    6751 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19763-6232/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60 0x109694f60] Decompressors:map[bz2:0x1400000fca0 gz:0x1400000fca8 tar:0x1400000fc50 tar.bz2:0x1400000fc60 tar.gz:0x1400000fc70 tar.xz:0x1400000fc80 tar.zst:0x1400000fc90 tbz2:0x1400000fc60 tgz:0x1400000fc70 txz:0x1400000fc80 tzst:0x1400000fc90 xz:0x1400000fcb0 zip:0x1400000fcd0 zst:0x1400000fcb8] Getters:map[file:0x140005b2b30 http:0x140008c00a0 https:0x140008c00f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1007 04:44:12.154128    6751 out_reason.go:110] 
	W1007 04:44:12.161103    6751 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 04:44:12.165068    6751 out.go:193] 
	
	
	* The control-plane node download-only-915000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-915000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
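The step passes even though "logs" exits with status 85 for this download-only profile; the noteworthy failure recorded in the stdout above is the cached-kubectl download, where dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 checksum, presumably because no darwin/arm64 kubectl build was published for that release. A quick probe of the URL taken from the getter error (a hypothetical helper, not part of the suite):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Checksum URL copied verbatim from the getter error above; a 404
        // here confirms the artifact simply does not exist upstream.
        url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status)
    }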

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-915000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (18.55s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-501000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-501000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (18.546395208s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.55s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 04:44:31.131073    6750 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1007 04:44:31.131126    6750 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-501000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-501000: exit status 85 (84.006125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:43 PDT |                     |
	|         | -p download-only-915000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| delete  | -p download-only-915000        | download-only-915000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT | 07 Oct 24 04:44 PDT |
	| start   | -o=json --download-only        | download-only-501000 | jenkins | v1.34.0 | 07 Oct 24 04:44 PDT |                     |
	|         | -p download-only-501000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 04:44:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 04:44:12.616916    6775 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:44:12.617067    6775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:44:12.617071    6775 out.go:358] Setting ErrFile to fd 2...
	I1007 04:44:12.617073    6775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:44:12.617185    6775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:44:12.618300    6775 out.go:352] Setting JSON to true
	I1007 04:44:12.636204    6775 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4423,"bootTime":1728297029,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:44:12.636274    6775 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:44:12.639558    6775 out.go:97] [download-only-501000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:44:12.639662    6775 notify.go:220] Checking for updates...
	I1007 04:44:12.643531    6775 out.go:169] MINIKUBE_LOCATION=19763
	I1007 04:44:12.646602    6775 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:44:12.650522    6775 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:44:12.653520    6775 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:44:12.656560    6775 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	W1007 04:44:12.662518    6775 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 04:44:12.662722    6775 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:44:12.665543    6775 out.go:97] Using the qemu2 driver based on user configuration
	I1007 04:44:12.665552    6775 start.go:297] selected driver: qemu2
	I1007 04:44:12.665556    6775 start.go:901] validating driver "qemu2" against <nil>
	I1007 04:44:12.665605    6775 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 04:44:12.668496    6775 out.go:169] Automatically selected the socket_vmnet network
	I1007 04:44:12.673819    6775 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1007 04:44:12.673966    6775 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 04:44:12.673984    6775 cni.go:84] Creating CNI manager for ""
	I1007 04:44:12.674006    6775 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 04:44:12.674011    6775 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 04:44:12.674056    6775 start.go:340] cluster config:
	{Name:download-only-501000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:44:12.678284    6775 iso.go:125] acquiring lock: {Name:mk3db85f54a6554c710a2cbe833c7d87e4bfaf4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 04:44:12.681541    6775 out.go:97] Starting "download-only-501000" primary control-plane node in "download-only-501000" cluster
	I1007 04:44:12.681549    6775 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:44:13.766857    6775 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 04:44:13.766986    6775 cache.go:56] Caching tarball of preloaded images
	I1007 04:44:13.768082    6775 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 04:44:13.773555    6775 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1007 04:44:13.773581    6775 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1007 04:44:14.330614    6775 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19763-6232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-501000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-501000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-501000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
I1007 04:44:31.658365    6750 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-828000 --alsologtostderr --binary-mirror http://127.0.0.1:51043 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-828000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-828000
--- PASS: TestBinaryMirror (0.30s)
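TestBinaryMirror exercises --binary-mirror against a local HTTP endpoint on 127.0.0.1:51043, so kubectl is fetched from that mirror rather than from dl.k8s.io. A minimal sketch of such a mirror, assuming a local directory laid out like the upstream release tree (the ./mirror path and layout are assumptions, not taken from the test):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve ./mirror so that a request such as
        // /v1.31.1/bin/darwin/arm64/kubectl resolves against a local copy
        // of the dl.k8s.io release layout.
        log.Fatal(http.ListenAndServe("127.0.0.1:51043",
            http.FileServer(http.Dir("./mirror"))))
    }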

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-193000
addons_test.go:934: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-193000: exit status 85 (65.185334ms)

-- stdout --
	* Profile "addons-193000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-193000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-193000
addons_test.go:945: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-193000: exit status 85 (61.529083ms)

-- stdout --
	* Profile "addons-193000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-193000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1007 04:56:17.248031    6750 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 04:56:17.248174    6750 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1007 04:56:19.240513    6750 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1007 04:56:19.240715    6750 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1007 04:56:19.240762    6750 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit
I1007 04:56:19.771782    6750 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380 0x107b4e380] Decompressors:map[bz2:0x1400046e230 gz:0x1400046e238 tar:0x1400046e1e0 tar.bz2:0x1400046e1f0 tar.gz:0x1400046e200 tar.xz:0x1400046e210 tar.zst:0x1400046e220 tbz2:0x1400046e1f0 tgz:0x1400046e200 txz:0x1400046e210 tzst:0x1400046e220 xz:0x1400046e240 zip:0x1400046e250 zst:0x1400046e248] Getters:map[file:0x1400084ea20 http:0x14000c30fa0 https:0x14000c30ff0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 04:56:19.771924    6750 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate929997505/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.86s)
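
The two download.go lines above capture the installer's fallback: fetching the arch-suffixed release asset (-arm64) fails because its .sha256 checksum file 404s, so the installer retries the common, un-suffixed asset. A minimal sketch of that try-arch-then-fall-back pattern with net/http only (the real code goes through go-getter and verifies the checksum as well):

```go
package main

import (
	"fmt"
	"net/http"
	"runtime"
)

// fetchDriver tries the GOARCH-specific release asset first and falls back
// to the common (un-suffixed) asset when the arch-specific one is missing.
func fetchDriver(base string) (*http.Response, error) {
	archURL := fmt.Sprintf("%s-%s", base, runtime.GOARCH) // e.g. ...-arm64
	resp, err := http.Get(archURL)
	if err == nil && resp.StatusCode == http.StatusOK {
		return resp, nil
	}
	if resp != nil {
		resp.Body.Close()
	}
	// Arch-specific asset unavailable (e.g. bad response code: 404);
	// try the common version, as the log above does.
	return http.Get(base)
}

func main() {
	resp, err := fetchDriver("https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit")
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```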

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status: exit status 7 (36.179083ms)

-- stdout --
	nospam-744000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status: exit status 7 (34.860125ms)

-- stdout --
	nospam-744000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status: exit status 7 (35.0395ms)

-- stdout --
	nospam-744000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.11s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause: exit status 83 (45.064334ms)

-- stdout --
	* The control-plane node nospam-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-744000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause: exit status 83 (44.917333ms)

-- stdout --
	* The control-plane node nospam-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-744000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause: exit status 83 (44.674041ms)

-- stdout --
	* The control-plane node nospam-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-744000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.14s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause: exit status 83 (42.8125ms)

-- stdout --
	* The control-plane node nospam-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-744000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause: exit status 83 (43.8935ms)

-- stdout --
	* The control-plane node nospam-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-744000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause: exit status 83 (43.733625ms)

-- stdout --
	* The control-plane node nospam-744000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-744000"

                                                
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 stop: (3.457224458s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 stop: (4.025432208s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-744000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-744000 stop: (3.603308083s)
--- PASS: TestErrorSpam/stop (11.09s)
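
The TestErrorSpam subtests assert on minikube's exit codes rather than on output text. Read off this run alone: status against a stopped host exits 7, pause/unpause against a non-running control plane exit 83, and commands against a missing profile exited 85 earlier in the report. A sketch of branching on those codes with os/exec (mapping derived from this log only, and the binary shortened to whatever minikube is on PATH):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCodeMeanings maps the minikube exit codes observed in this report to
// the situations that produced them here; it is not an exhaustive table.
var exitCodeMeanings = map[int]string{
	7:  "status: host stopped",
	83: "pause/unpause: control-plane host not running",
	85: "profile not found",
}

func main() {
	cmd := exec.Command("minikube", "-p", "nospam-744000", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code := exitErr.ExitCode()
		fmt.Printf("exit %d (%s)\n%s", code, exitCodeMeanings[code], out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("exit 0\n%s", out)
}
```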

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19763-6232/.minikube/files/etc/test/nested/copy/6750/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local238016666/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cache add minikube-local-cache-test:functional-418000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 cache delete minikube-local-cache-test:functional-418000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-418000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 config get cpus: exit status 14 (34.763792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 config get cpus: exit status 14 (35.339625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-418000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-418000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (166.548875ms)

-- stdout --
	* [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1007 04:46:13.632062    7359 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:46:13.632230    7359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:13.632234    7359 out.go:358] Setting ErrFile to fd 2...
	I1007 04:46:13.632237    7359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:13.632408    7359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:46:13.633702    7359 out.go:352] Setting JSON to false
	I1007 04:46:13.653575    7359 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4544,"bootTime":1728297029,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:46:13.653639    7359 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:46:13.658713    7359 out.go:177] * [functional-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 04:46:13.665691    7359 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:46:13.665750    7359 notify.go:220] Checking for updates...
	I1007 04:46:13.672682    7359 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:46:13.675648    7359 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:46:13.678672    7359 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:46:13.681654    7359 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:46:13.684674    7359 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:46:13.687929    7359 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:46:13.688226    7359 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:46:13.692642    7359 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 04:46:13.699595    7359 start.go:297] selected driver: qemu2
	I1007 04:46:13.699601    7359 start.go:901] validating driver "qemu2" against &{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:46:13.699670    7359 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:46:13.706664    7359 out.go:201] 
	W1007 04:46:13.709601    7359 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 04:46:13.713599    7359 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-418000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
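
The dry run exits 23 because the requested 250MB is under the 1800MB usable minimum that start enforces before doing any VM work (note the message mixes MiB and MB). A sketch of that guard as a standalone check, with the constant and wording copied from the log; the real validation lives inside minikube's start path:

```go
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // minimum cited in the dry-run output above

// validateMemory mirrors the RSRC_INSUFFICIENT_REQ_MEMORY check: reject any
// request below the usable minimum before provisioning anything.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X", err)
		os.Exit(23) // the exit status observed for this failure in the log
	}
}
```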

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-418000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-418000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.956375ms)

-- stdout --
	* [functional-418000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1007 04:46:13.869235    7370 out.go:345] Setting OutFile to fd 1 ...
	I1007 04:46:13.869377    7370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:13.869381    7370 out.go:358] Setting ErrFile to fd 2...
	I1007 04:46:13.869384    7370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 04:46:13.869527    7370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19763-6232/.minikube/bin
	I1007 04:46:13.871013    7370 out.go:352] Setting JSON to false
	I1007 04:46:13.889458    7370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4544,"bootTime":1728297029,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 04:46:13.889545    7370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 04:46:13.894655    7370 out.go:177] * [functional-418000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1007 04:46:13.902608    7370 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 04:46:13.902643    7370 notify.go:220] Checking for updates...
	I1007 04:46:13.909687    7370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	I1007 04:46:13.912648    7370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 04:46:13.915650    7370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 04:46:13.918663    7370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	I1007 04:46:13.921627    7370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 04:46:13.924890    7370 config.go:182] Loaded profile config "functional-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 04:46:13.925172    7370 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 04:46:13.929653    7370 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1007 04:46:13.936623    7370 start.go:297] selected driver: qemu2
	I1007 04:46:13.936628    7370 start.go:901] validating driver "qemu2" against &{Name:functional-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 04:46:13.936698    7370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 04:46:13.943688    7370 out.go:201] 
	W1007 04:46:13.947620    7370 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 04:46:13.951618    7370 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
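
Repeating the same under-provisioned dry run under a French locale yields translated output, so the test only needs to assert on the localized strings. A toy sketch of locale-keyed message lookup; the two-entry catalog and the environment-variable precedence here are illustrative assumptions, not minikube's actual translation machinery:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// catalog stands in for per-locale translation files.
var catalog = map[string]map[string]string{
	"fr": {"using-driver": "Utilisation du pilote %s basé sur le profil existant"},
	"en": {"using-driver": "Using the %s driver based on existing profile"},
}

// lang derives a language code from common locale variables, e.g. fr_FR.UTF-8 -> fr.
func lang() string {
	for _, v := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
		if val := os.Getenv(v); val != "" {
			return strings.SplitN(val, "_", 2)[0]
		}
	}
	return "en"
}

func main() {
	msgs, ok := catalog[lang()]
	if !ok {
		msgs = catalog["en"] // fall back to English for unknown locales
	}
	fmt.Printf("* "+msgs["using-driver"]+"\n", "qemu2")
}
```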

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-arm64 license: (1.359861792s)
--- PASS: TestFunctional/parallel/License (1.36s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.667053084s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-418000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image rm kicbase/echo-server:functional-418000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-418000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 image save --daemon kicbase/echo-server:functional-418000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-418000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "52.6655ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
I1007 04:45:34.994310    6750 retry.go:31] will retry after 3.768684331s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1329: Took "38.9385ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "56.126792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.59925ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
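
The Took "..." lines are the harness timing each invocation, so a slow profile command surfaces as a duration regression rather than a hang. A sketch of the same measurement with time.Since around os/exec (binary shortened to whatever minikube is on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	elapsed := time.Since(start)
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Matches the `Took "56.126792ms" to run ...` reporting style above.
	fmt.Printf("Took %q to run \"minikube profile list -o json\" (%d bytes)\n",
		elapsed.String(), len(out))
}
```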

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.015122s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)
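
dscacheutil queries macOS's Directory Service cache, so a hit here shows the tunnel's DNS entry is visible system-wide rather than only to dig. A sketch of issuing the same query from Go and checking for an ip_address line in the reply (the helper name is made up):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// resolvesViaDscacheutil runs the same lookup as the tunnel test and reports
// whether the Directory Service cache returned an address for name.
func resolvesViaDscacheutil(name string) (bool, error) {
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", name).Output()
	if err != nil {
		return false, err
	}
	// A successful lookup includes an "ip_address:" line in the output.
	return strings.Contains(string(out), "ip_address:"), nil
}

func main() {
	ok, err := resolvesViaDscacheutil("nginx-svc.default.svc.cluster.local.")
	fmt.Println("resolved:", ok, "err:", err)
}
```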

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-418000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-418000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-418000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-418000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-439000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-439000 --output=json --user=testUser: (2.14006925s)
--- PASS: TestJSONOutput/stop/Command (2.14s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-079000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-079000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (105.944833ms)

-- stdout --
	{"specversion":"1.0","id":"5dbcd2b0-e6de-47c7-8d1a-00e8315feae2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-079000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ccb46cf-0ffd-420f-b286-a227d0150e54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"0e539f1a-c2c6-4e4f-b0c2-07e8f2afb461","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig"}}
	{"specversion":"1.0","id":"a776305c-00cd-489e-aac1-823ffb2646ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a4cadb54-05aa-44f5-ab33-6e4154afee9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2410aac-c211-466e-a4ff-1689ab7ae89f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube"}}
	{"specversion":"1.0","id":"607f0bd4-d2d3-4ac4-9567-092619059c58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0be77853-dec3-43eb-b29e-223f83c42b2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-079000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-079000
--- PASS: TestErrorJSONOutput (0.22s)
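
Each stdout line above is a CloudEvents envelope (specversion 1.0) whose type tag separates steps, info, and errors; the TestJSONOutput/*/parallel subtests earlier assert that step events carry distinct, increasing currentstep values. A sketch that parses such a stream and applies that check, with the struct trimmed to the fields visible in this log:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// event models the envelope emitted with --output=json, trimmed to the
// fields this report actually shows.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube ... --output=json` in here
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise in the stream
		}
		switch {
		case strings.HasSuffix(ev.Type, ".step"):
			cur, _ := strconv.Atoi(ev.Data["currentstep"])
			// Mirrors Distinct/IncreasingCurrentSteps: steps must strictly increase.
			if cur <= last {
				fmt.Printf("step %d does not increase past %d\n", cur, last)
			}
			last = cur
		case strings.HasSuffix(ev.Type, ".error"):
			fmt.Printf("error %s: %s\n", ev.Data["name"], ev.Data["message"])
		}
	}
}
```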

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.05s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-602000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (111.835292ms)
-- stdout --
	* [NoKubernetes-602000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19763-6232/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19763-6232/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-602000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-602000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.793667ms)
-- stdout --
	* The control-plane node NoKubernetes-602000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-602000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.55s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.764064291s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.79050525s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.55s)

TestNoKubernetes/serial/Stop (3.81s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-602000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-602000: (3.810132917s)
--- PASS: TestNoKubernetes/serial/Stop (3.81s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-602000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-602000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.321458ms)
-- stdout --
	* The control-plane node NoKubernetes-602000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-602000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-013000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (3.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-537000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-537000 --alsologtostderr -v=3: (3.301115666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (63.964125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-537000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
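Note: the --format={{.Host}} argument used by the status checks above is a Go text/template rendered against minikube's status data, and the captured stdout ("Stopped") is the template's output; the test treats exit status 7 as "may be ok" because here it accompanies a deliberately stopped host. A minimal sketch of the template mechanism, with a stand-in Status struct rather than minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct minikube renders with --format.
	type Status struct {
		Host    string
		Kubelet string
	}

	func main() {
		// Same template string as the --format flag used above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the captured stdout.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
	}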

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (2.09s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-599000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-599000 --alsologtostderr -v=3: (2.085355083s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-599000 -n no-preload-599000: exit status 7 (57.938ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-599000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.66s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-430000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-430000 --alsologtostderr -v=3: (3.658995833s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.66s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-717000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-717000 --alsologtostderr -v=3: (3.851465209s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.85s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-430000 -n embed-certs-430000: exit status 7 (59.190541ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-430000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-717000 -n default-k8s-diff-port-717000: exit status 7 (61.841875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-717000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-458000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.12s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-458000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-458000 --alsologtostderr -v=3: (3.121598208s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-458000 -n newest-cni-458000: exit status 7 (62.952625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-458000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.63s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2172990610/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728301535096515000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2172990610/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728301535096515000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2172990610/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728301535096515000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2172990610/001/test-1728301535096515000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.483208ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:35.158542    6750 retry.go:31] will retry after 552.36489ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.140875ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:35.801158    6750 retry.go:31] will retry after 878.849961ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.749875ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:36.772055    6750 retry.go:31] will retry after 1.533623423s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.068166ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:38.401216    6750 retry.go:31] will retry after 2.253410408s: exit status 83
I1007 04:45:38.765278    6750 retry.go:31] will retry after 7.584964141s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.884792ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:40.749091    6750 retry.go:31] will retry after 2.870027466s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.225875ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:43.713774    6750 retry.go:31] will retry after 2.750772474s: exit status 83
I1007 04:45:46.352619    6750 retry.go:31] will retry after 7.481151776s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.225709ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo umount -f /mount-9p": exit status 83 (48.913875ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2172990610/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.63s)
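Note: the interleaved "retry.go:31] will retry after ..." lines above show the test helper polling with growing waits until the mount appears or the attempt budget is exhausted; on this run the mount never appears because the host is stopped and macOS would require a prompt before letting the unsigned binary listen. A rough sketch of that retry-with-backoff pattern (retryExpo is a hypothetical helper for illustration, not minikube's actual retry package):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryExpo re-runs fn with exponentially growing waits, capped at maxWait.
	func retryExpo(fn func() error, wait, maxWait time.Duration, attempts int) error {
		for i := 0; i < attempts; i++ {
			err := fn()
			if err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2
			if wait > maxWait {
				wait = maxWait
			}
		}
		return errors.New("all retry attempts failed")
	}

	func main() {
		// Example: give up quickly, as the mount never appears on a stopped host.
		err := retryExpo(func() error { return errors.New("exit status 83") },
			500*time.Millisecond, 8*time.Second, 3)
		fmt.Println(err)
	}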

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (11.35s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3375522170/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (66.53425ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:46.795751    6750 retry.go:31] will retry after 390.758884ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.011209ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:47.280967    6750 retry.go:31] will retry after 1.047696924s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.94375ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:48.422051    6750 retry.go:31] will retry after 1.109947418s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.9795ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:49.625296    6750 retry.go:31] will retry after 1.346993654s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.77025ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:51.065596    6750 retry.go:31] will retry after 3.460918599s: exit status 83
I1007 04:45:53.836131    6750 retry.go:31] will retry after 8.536470842s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.925375ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:54.618982    6750 retry.go:31] will retry after 3.20084998s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.008583ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "sudo umount -f /mount-9p": exit status 83 (47.784583ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-418000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3375522170/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.35s)

TestFunctional/parallel/MountCmd/VerifyCleanup (15.48s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3789467296/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3789467296/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3789467296/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (84.737375ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:58.169234    6750 retry.go:31] will retry after 578.211704ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (88.455ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:58.838300    6750 retry.go:31] will retry after 484.612951ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (91.05575ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:45:59.416448    6750 retry.go:31] will retry after 856.684573ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (91.57825ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:46:00.367125    6750 retry.go:31] will retry after 2.055661608s: exit status 83
I1007 04:46:02.374947    6750 retry.go:31] will retry after 24.420600095s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (90.075792ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:46:02.515186    6750 retry.go:31] will retry after 2.628001456s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (92.420459ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:46:05.238063    6750 retry.go:31] will retry after 4.59678549s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (91.603041ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
I1007 04:46:09.928847    6750 retry.go:31] will retry after 3.146121468s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-418000 ssh "findmnt -T" /mount1: exit status 83 (92.311792ms)
-- stdout --
	* The control-plane node functional-418000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-418000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3789467296/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3789467296/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-418000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3789467296/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.48s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.49s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-842000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-842000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-842000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: /etc/hosts:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: /etc/resolv.conf:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-842000

>>> host: crictl pods:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: crictl containers:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> k8s: describe netcat deployment:
error: context "cilium-842000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-842000" does not exist

>>> k8s: netcat logs:
error: context "cilium-842000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-842000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-842000" does not exist

>>> k8s: coredns logs:
error: context "cilium-842000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-842000" does not exist

>>> k8s: api server logs:
error: context "cilium-842000" does not exist

>>> host: /etc/cni:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: ip a s:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: ip r s:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: iptables-save:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: iptables table nat:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-842000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-842000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-842000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-842000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-842000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-842000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-842000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-842000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-842000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-842000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-842000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

>>> host: kubelet daemon config:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"
>>> k8s: kubelet logs:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-842000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-842000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842000"

                                                
                                                
----------------------- debugLogs end: cilium-842000 [took: 2.377471833s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-842000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-842000
--- SKIP: TestNetworkPlugins/group/cilium (2.49s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-752000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-752000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)
