Test Report: QEMU_macOS 19883

121f0c56d9928f50a4014e71c8f2076bb23ebfa1:2024-10-30:36875

Failed tests (157/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.68
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.28
22 TestOffline 9.99
27 TestAddons/Setup 10.02
28 TestCertOptions 10.36
29 TestCertExpiration 195.41
30 TestDockerFlags 10.1
31 TestForceSystemdFlag 10.31
32 TestForceSystemdEnv 10.25
38 TestErrorSpam/setup 9.8
47 TestFunctional/serial/StartWithProxy 10.07
49 TestFunctional/serial/SoftStart 5.29
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 1.86
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.2
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.19
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.31
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.32
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 97.15
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.05
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
104 TestFunctional/parallel/ServiceCmd/Format 0.05
105 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/Version/components 0.05
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
127 TestFunctional/parallel/DockerEnv/bash 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 9.93
142 TestMultiControlPlane/serial/DeployApp 113.62
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 59.02
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.13
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.12
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.36
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.94
165 TestJSONOutput/start/Command 9.9
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.11
197 TestMountStart/serial/StartWithMountFirst 10.12
200 TestMultiNode/serial/FreshStart2Nodes 9.99
201 TestMultiNode/serial/DeployApp2Nodes 89.08
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 51.36
209 TestMultiNode/serial/RestartKeepsNodes 8.23
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 2.17
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.29
217 TestPreload 10.12
219 TestScheduledStopUnix 10.12
220 TestSkaffold 12.35
223 TestRunningBinaryUpgrade 596.92
225 TestKubernetesUpgrade 17.51
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.95
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.01
241 TestStoppedBinaryUpgrade/Upgrade 690.98
243 TestPause/serial/Start 10
253 TestNoKubernetes/serial/StartWithK8s 9.98
254 TestNoKubernetes/serial/StartWithStopK8s 5.32
255 TestNoKubernetes/serial/Start 5.32
259 TestNoKubernetes/serial/StartNoArgs 5.38
261 TestNetworkPlugins/group/auto/Start 9.7
262 TestNetworkPlugins/group/kindnet/Start 10
263 TestNetworkPlugins/group/calico/Start 9.94
264 TestNetworkPlugins/group/custom-flannel/Start 9.89
265 TestNetworkPlugins/group/false/Start 9.78
266 TestNetworkPlugins/group/enable-default-cni/Start 9.79
267 TestNetworkPlugins/group/flannel/Start 9.88
268 TestNetworkPlugins/group/bridge/Start 9.75
269 TestNetworkPlugins/group/kubenet/Start 9.81
271 TestStartStop/group/old-k8s-version/serial/FirstStart 10.2
272 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
276 TestStartStop/group/old-k8s-version/serial/SecondStart 5.22
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
279 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
280 TestStartStop/group/old-k8s-version/serial/Pause 0.12
282 TestStartStop/group/no-preload/serial/FirstStart 9.92
283 TestStartStop/group/no-preload/serial/DeployApp 0.1
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
287 TestStartStop/group/no-preload/serial/SecondStart 5.26
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
291 TestStartStop/group/no-preload/serial/Pause 0.11
293 TestStartStop/group/embed-certs/serial/FirstStart 10.04
294 TestStartStop/group/embed-certs/serial/DeployApp 0.1
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
298 TestStartStop/group/embed-certs/serial/SecondStart 5.26
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
302 TestStartStop/group/embed-certs/serial/Pause 0.11
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.9
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
313 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
315 TestStartStop/group/newest-cni/serial/FirstStart 9.9
320 TestStartStop/group/newest-cni/serial/SecondStart 5.27
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (25.68s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (25.67834475s)

-- stdout --
	{"specversion":"1.0","id":"2c85c68a-d0b8-4bb6-b27e-0d554ad65de2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"134bb31f-3849-4983-b237-259487aa2ef7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19883"}}
	{"specversion":"1.0","id":"2ab676e1-04e9-4a12-b1c3-899188729dfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig"}}
	{"specversion":"1.0","id":"69d5b3de-e73e-4038-b295-837e862f39eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a7cd7f88-aa7c-4be4-912f-ab9756a177ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3bcd3e58-6ee9-4439-84a2-9cbf785aeebc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube"}}
	{"specversion":"1.0","id":"e4469dd4-57c4-40e3-8dc6-f498cf96a855","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"3bcc8150-1fbd-4546-9641-ed9932aa6555","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e8e3f30-c5c5-4588-bb39-9696d90ef0b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"dfbb38d8-e152-4a7b-a119-75056af0de43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"971467ae-da0a-4207-ba99-040362797799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-089000\" primary control-plane node in \"download-only-089000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a30ae02-b7a7-4b91-8223-497e39d5e7ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3e27c4d-5d08-4697-a089-a68b4340eb4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340] Decompressors:map[bz2:0x140003eed80 gz:0x140003eed88 tar:0x140003eed30 tar.bz2:0x140003eed40 tar.gz:0x140003eed50 tar.xz:0x140003eed60 tar.zst:0x140003eed70 tbz2:0x140003eed40 tgz:0x1
40003eed50 txz:0x140003eed60 tzst:0x140003eed70 xz:0x140003eed90 zip:0x140003eeda0 zst:0x140003eed98] Getters:map[file:0x14000a2e680 http:0x14000048690 https:0x140000486e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"0b3f8418-3316-44e7-8933-b4b7579fecb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1030 11:16:40.901220   12044 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:16:40.901402   12044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:16:40.901405   12044 out.go:358] Setting ErrFile to fd 2...
	I1030 11:16:40.901407   12044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:16:40.901540   12044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	W1030 11:16:40.901635   12044 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19883-11536/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19883-11536/.minikube/config/config.json: no such file or directory
	I1030 11:16:40.903121   12044 out.go:352] Setting JSON to true
	I1030 11:16:40.921115   12044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6371,"bootTime":1730305829,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:16:40.921184   12044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:16:40.927053   12044 out.go:97] [download-only-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:16:40.927173   12044 notify.go:220] Checking for updates...
	W1030 11:16:40.927244   12044 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball: no such file or directory
	I1030 11:16:40.929987   12044 out.go:169] MINIKUBE_LOCATION=19883
	I1030 11:16:40.933036   12044 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:16:40.938026   12044 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:16:40.940958   12044 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:16:40.944997   12044 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	W1030 11:16:40.950894   12044 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1030 11:16:40.951176   12044 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:16:40.953925   12044 out.go:97] Using the qemu2 driver based on user configuration
	I1030 11:16:40.953942   12044 start.go:297] selected driver: qemu2
	I1030 11:16:40.953957   12044 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:16:40.954014   12044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:16:40.956926   12044 out.go:169] Automatically selected the socket_vmnet network
	I1030 11:16:40.962508   12044 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1030 11:16:40.962633   12044 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 11:16:40.962681   12044 cni.go:84] Creating CNI manager for ""
	I1030 11:16:40.962734   12044 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1030 11:16:40.962790   12044 start.go:340] cluster config:
	{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:16:40.967538   12044 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:16:40.972003   12044 out.go:97] Downloading VM boot image ...
	I1030 11:16:40.972019   12044 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso
	I1030 11:16:53.467524   12044 out.go:97] Starting "download-only-089000" primary control-plane node in "download-only-089000" cluster
	I1030 11:16:53.467567   12044 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:16:53.526800   12044 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1030 11:16:53.526825   12044 cache.go:56] Caching tarball of preloaded images
	I1030 11:16:53.527013   12044 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:16:53.533134   12044 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1030 11:16:53.533141   12044 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1030 11:16:53.614060   12044 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1030 11:17:05.275946   12044 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1030 11:17:05.276112   12044 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1030 11:17:05.970003   12044 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1030 11:17:05.970213   12044 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/download-only-089000/config.json ...
	I1030 11:17:05.970229   12044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/download-only-089000/config.json: {Name:mk7fc06580051dfc989c2e90aefb7130eeed8b7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:17:05.970534   12044 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:17:05.970784   12044 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1030 11:17:06.499110   12044 out.go:193] 
	W1030 11:17:06.501928   12044 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340] Decompressors:map[bz2:0x140003eed80 gz:0x140003eed88 tar:0x140003eed30 tar.bz2:0x140003eed40 tar.gz:0x140003eed50 tar.xz:0x140003eed60 tar.zst:0x140003eed70 tbz2:0x140003eed40 tgz:0x140003eed50 txz:0x140003eed60 tzst:0x140003eed70 xz:0x140003eed90 zip:0x140003eeda0 zst:0x140003eed98] Getters:map[file:0x14000a2e680 http:0x14000048690 https:0x140000486e0] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1030 11:17:06.501956   12044 out_reason.go:110] 
	W1030 11:17:06.509168   12044 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:17:06.512066   12044 out.go:193] 

** /stderr **
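Each line in the -- stdout -- block above is a CloudEvents-style JSON object emitted by -o=json, so the failing event can be isolated mechanically. A sketch, assuming jq is installed and the stream was saved to a hypothetical start.json:

	# Print the message of every io.k8s.sigs.minikube.error event.
	jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message' start.json
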
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-089000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (25.68s)
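The terminal error is a 404 on the kubectl checksum file, consistent with no darwin/arm64 kubectl having been published for v1.20.0. A quick check using the exact URL from the log:

	# A 404 status line here reproduces the INET_CACHE_KUBECTL failure.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
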

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
I1030 11:17:16.480794   12043 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-200000 --alsologtostderr --binary-mirror http://127.0.0.1:56977 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-200000 --alsologtostderr --binary-mirror http://127.0.0.1:56977 --driver=qemu2 : exit status 40 (169.347166ms)

-- stdout --
	* [binary-mirror-200000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-200000" primary control-plane node in "binary-mirror-200000" cluster
	
	

-- /stdout --
** stderr ** 
	I1030 11:17:16.543935   12104 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:17:16.544086   12104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:16.544089   12104 out.go:358] Setting ErrFile to fd 2...
	I1030 11:17:16.544092   12104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:16.544223   12104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:17:16.545310   12104 out.go:352] Setting JSON to false
	I1030 11:17:16.563073   12104 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6407,"bootTime":1730305829,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:17:16.563149   12104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:17:16.567246   12104 out.go:177] * [binary-mirror-200000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:17:16.574279   12104 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:17:16.574378   12104 notify.go:220] Checking for updates...
	I1030 11:17:16.582170   12104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:17:16.585258   12104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:17:16.588189   12104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:17:16.591214   12104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:17:16.594422   12104 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:17:16.598267   12104 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:17:16.605245   12104 start.go:297] selected driver: qemu2
	I1030 11:17:16.605253   12104 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:17:16.605313   12104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:17:16.608197   12104 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:17:16.614648   12104 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1030 11:17:16.614746   12104 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 11:17:16.614769   12104 cni.go:84] Creating CNI manager for ""
	I1030 11:17:16.614795   12104 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:17:16.614802   12104 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:17:16.614847   12104 start.go:340] cluster config:
	{Name:binary-mirror-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:binary-mirror-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:56977 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_
vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:17:16.619311   12104 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:17:16.626196   12104 out.go:177] * Starting "binary-mirror-200000" primary control-plane node in "binary-mirror-200000" cluster
	I1030 11:17:16.630207   12104 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:17:16.630221   12104 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:17:16.630230   12104 cache.go:56] Caching tarball of preloaded images
	I1030 11:17:16.630315   12104 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:17:16.630320   12104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:17:16.630532   12104 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/binary-mirror-200000/config.json ...
	I1030 11:17:16.630544   12104 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/binary-mirror-200000/config.json: {Name:mk3df979780fda8c03b4d159bb04a5c43849e3ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:17:16.630783   12104 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:17:16.630845   12104 download.go:107] Downloading: http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.31.2/kubectl
	I1030 11:17:16.656263   12104 out.go:201] 
	W1030 11:17:16.660234   12104 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.31.2/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109be9340 0x109be9340 0x109be9340 0x109be9340 0x109be9340 0x109be9340 0x109be9340] Decompressors:map[bz2:0x14000697470 gz:0x14000697478 tar:0x14000697410 tar.bz2:0x14000697420 tar.gz:0x14000697430 tar.xz:0x14000697440 tar.zst:0x14000697450 tbz2:0x14000697420 tgz:0x14000697430 txz:0x14000697440 tzst:0x14000697450 xz:0x14000697490 zip:0x140006974a0 zst:0x14000697498] Getters:map[file:0x14000173120 http:0x140000db040 https:0x140000db090] Dir
:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:56977/v1.31.2/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.31.2/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109be9340 0x109be9340 0x109be9340 0x109be9340 0x109be9340 0x109be9340 0x109be9340] Decompressors:map[bz2:0x14000697470 gz:0x14000697478 tar:0x14000697410 tar.bz2:0x14000697420 tar.gz:0x14000697430 tar.xz:0x14000697440 tar.zst:0x14000697450 tbz2:0x14000697420 tgz:0x14000697430 txz:0x14000697440 tzst:0x14000697450 xz:0x14000697490 zip:0x140006974a0 zst:0x14000697498] Getters:map[file:0x14000173120 http:0x140000db040 https:0x140000db090] Dir:false ProgressListener:<nil> Insecure:fal
se DisableSymlinks:false Options:[]}: unexpected EOF
	W1030 11:17:16.660240   12104 out.go:270] * 
	* 
	W1030 11:17:16.660661   12104 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:17:16.672104   12104 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-200000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:56977" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-200000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-200000
--- FAIL: TestBinaryMirror (0.28s)
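For reference, the download URL in the log shows the layout a --binary-mirror endpoint must serve: <mirror>/<kubernetes-version>/bin/<os>/<arch>/kubectl with a sibling kubectl.sha256. A minimal sketch of standing up such a mirror by hand (the directory, port, and bare-digest checksum format are assumptions, not taken from the test harness, which starts its own server):

	# Mirror dl.k8s.io's path scheme under ./mirror and serve it locally.
	mkdir -p mirror/v1.31.2/bin/darwin/arm64
	cp kubectl mirror/v1.31.2/bin/darwin/arm64/
	shasum -a 256 mirror/v1.31.2/bin/darwin/arm64/kubectl | awk '{print $1}' \
	  > mirror/v1.31.2/bin/darwin/arm64/kubectl.sha256
	python3 -m http.server 56977 --directory mirror
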

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-775000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-775000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.8280085s)

-- stdout --
	* [offline-docker-775000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-775000" primary control-plane node in "offline-docker-775000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
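Both create attempts above fail on the same host-side precondition: nothing is listening on /var/run/socket_vmnet, which the qemu2 driver dials via /opt/socket_vmnet/bin/socket_vmnet_client. A hedged host check, assuming socket_vmnet is installed as a launchd service per its standard setup:

	# Confirm the socket exists and the service is loaded.
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
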
** stderr ** 
	I1030 11:28:31.557921   13393 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:28:31.558108   13393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:31.558111   13393 out.go:358] Setting ErrFile to fd 2...
	I1030 11:28:31.558114   13393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:31.558252   13393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:28:31.559593   13393 out.go:352] Setting JSON to false
	I1030 11:28:31.579087   13393 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7082,"bootTime":1730305829,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:28:31.579207   13393 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:28:31.584596   13393 out.go:177] * [offline-docker-775000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:28:31.592751   13393 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:28:31.592781   13393 notify.go:220] Checking for updates...
	I1030 11:28:31.600617   13393 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:28:31.601797   13393 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:28:31.604662   13393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:28:31.607660   13393 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:28:31.610609   13393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:28:31.613996   13393 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:28:31.614053   13393 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:28:31.617583   13393 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:28:31.624628   13393 start.go:297] selected driver: qemu2
	I1030 11:28:31.624636   13393 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:28:31.624643   13393 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:28:31.626880   13393 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:28:31.629588   13393 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:28:31.632704   13393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:28:31.632724   13393 cni.go:84] Creating CNI manager for ""
	I1030 11:28:31.632746   13393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:28:31.632750   13393 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:28:31.632786   13393 start.go:340] cluster config:
	{Name:offline-docker-775000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-775000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:28:31.637454   13393 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:28:31.645621   13393 out.go:177] * Starting "offline-docker-775000" primary control-plane node in "offline-docker-775000" cluster
	I1030 11:28:31.649601   13393 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:28:31.649634   13393 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:28:31.649644   13393 cache.go:56] Caching tarball of preloaded images
	I1030 11:28:31.649733   13393 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:28:31.649739   13393 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:28:31.649829   13393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/offline-docker-775000/config.json ...
	I1030 11:28:31.649840   13393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/offline-docker-775000/config.json: {Name:mke41f8f5605399957ac52a98ee30e78d444e58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:28:31.650184   13393 start.go:360] acquireMachinesLock for offline-docker-775000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:31.650242   13393 start.go:364] duration metric: took 47.75µs to acquireMachinesLock for "offline-docker-775000"
	I1030 11:28:31.650262   13393 start.go:93] Provisioning new machine with config: &{Name:offline-docker-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-775000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:31.650289   13393 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:31.654686   13393 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:31.670114   13393 start.go:159] libmachine.API.Create for "offline-docker-775000" (driver="qemu2")
	I1030 11:28:31.670155   13393 client.go:168] LocalClient.Create starting
	I1030 11:28:31.670243   13393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:31.670287   13393 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:31.670298   13393 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:31.670344   13393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:31.670379   13393 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:31.670387   13393 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:31.670854   13393 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:31.831314   13393 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:31.935529   13393 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:31.935538   13393 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:31.935734   13393 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2
	I1030 11:28:31.946292   13393 main.go:141] libmachine: STDOUT: 
	I1030 11:28:31.946328   13393 main.go:141] libmachine: STDERR: 
	I1030 11:28:31.946401   13393 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2 +20000M
	I1030 11:28:31.959016   13393 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:31.959031   13393 main.go:141] libmachine: STDERR: 
	I1030 11:28:31.959052   13393 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2
	I1030 11:28:31.959060   13393 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:31.959078   13393 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:31.959109   13393 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b8:03:2a:b0:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2
	I1030 11:28:31.960995   13393 main.go:141] libmachine: STDOUT: 
	I1030 11:28:31.961009   13393 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:31.961026   13393 client.go:171] duration metric: took 290.866333ms to LocalClient.Create
	I1030 11:28:33.961405   13393 start.go:128] duration metric: took 2.311136334s to createHost
	I1030 11:28:33.961422   13393 start.go:83] releasing machines lock for "offline-docker-775000", held for 2.311203167s
	W1030 11:28:33.961432   13393 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:33.968888   13393 out.go:177] * Deleting "offline-docker-775000" in qemu2 ...
	W1030 11:28:33.978528   13393 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:33.978537   13393 start.go:729] Will try again in 5 seconds ...
	I1030 11:28:38.980716   13393 start.go:360] acquireMachinesLock for offline-docker-775000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:38.981239   13393 start.go:364] duration metric: took 429.208µs to acquireMachinesLock for "offline-docker-775000"
	I1030 11:28:38.981372   13393 start.go:93] Provisioning new machine with config: &{Name:offline-docker-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-775000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:38.981609   13393 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:38.993397   13393 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:39.043119   13393 start.go:159] libmachine.API.Create for "offline-docker-775000" (driver="qemu2")
	I1030 11:28:39.043171   13393 client.go:168] LocalClient.Create starting
	I1030 11:28:39.043319   13393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:39.043405   13393 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:39.043422   13393 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:39.043495   13393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:39.043555   13393 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:39.043578   13393 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:39.044286   13393 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:39.216492   13393 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:39.279367   13393 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:39.279373   13393 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:39.279565   13393 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2
	I1030 11:28:39.289334   13393 main.go:141] libmachine: STDOUT: 
	I1030 11:28:39.289364   13393 main.go:141] libmachine: STDERR: 
	I1030 11:28:39.289424   13393 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2 +20000M
	I1030 11:28:39.297775   13393 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:39.297791   13393 main.go:141] libmachine: STDERR: 
	I1030 11:28:39.297807   13393 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2
	I1030 11:28:39.297813   13393 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:39.297821   13393 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:39.297856   13393 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:78:fb:63:dd:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/offline-docker-775000/disk.qcow2
	I1030 11:28:39.299701   13393 main.go:141] libmachine: STDOUT: 
	I1030 11:28:39.299714   13393 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:39.299725   13393 client.go:171] duration metric: took 256.55175ms to LocalClient.Create
	I1030 11:28:41.301873   13393 start.go:128] duration metric: took 2.320252166s to createHost
	I1030 11:28:41.301990   13393 start.go:83] releasing machines lock for "offline-docker-775000", held for 2.320716625s
	W1030 11:28:41.302293   13393 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:41.316018   13393 out.go:201] 
	W1030 11:28:41.320161   13393 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:28:41.320190   13393 out.go:270] * 
	* 
	W1030 11:28:41.323160   13393 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:28:41.337013   13393 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-775000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-30 11:28:41.353761 -0700 PDT m=+720.537590251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-775000 -n offline-docker-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-775000 -n offline-docker-775000: exit status 7 (72.4455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-775000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-775000
--- FAIL: TestOffline (9.99s)
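
Every qemu2 failure in this report reduces to the single STDERR line above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and host creation aborts. A minimal Go probe for that one precondition, using only the socket path taken from the log (an illustrative sketch, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	// Dial the unix socket that socket_vmnet_client connects to. A
	// "connection refused" here is exactly the failure mode in the log.
	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If the probe is refused, the socket path exists but no socket_vmnet daemon is listening; restarting that service on the build agent (it is normally run under launchd) should clear this entire class of failures.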

TestAddons/Setup (10.02s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-644000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-644000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.020731333s)

-- stdout --
	* [addons-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-644000" primary control-plane node in "addons-644000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:17:16.856595   12118 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:17:16.856766   12118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:16.856769   12118 out.go:358] Setting ErrFile to fd 2...
	I1030 11:17:16.856772   12118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:16.856893   12118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:17:16.858071   12118 out.go:352] Setting JSON to false
	I1030 11:17:16.876704   12118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6407,"bootTime":1730305829,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:17:16.876777   12118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:17:16.881228   12118 out.go:177] * [addons-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:17:16.888250   12118 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:17:16.888288   12118 notify.go:220] Checking for updates...
	I1030 11:17:16.896161   12118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:17:16.899218   12118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:17:16.902214   12118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:17:16.905230   12118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:17:16.908235   12118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:17:16.911374   12118 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:17:16.915164   12118 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:17:16.921191   12118 start.go:297] selected driver: qemu2
	I1030 11:17:16.921200   12118 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:17:16.921208   12118 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:17:16.923757   12118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:17:16.926233   12118 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:17:16.929268   12118 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:17:16.929284   12118 cni.go:84] Creating CNI manager for ""
	I1030 11:17:16.929306   12118 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:17:16.929310   12118 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:17:16.929336   12118 start.go:340] cluster config:
	{Name:addons-644000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:17:16.934121   12118 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:17:16.942200   12118 out.go:177] * Starting "addons-644000" primary control-plane node in "addons-644000" cluster
	I1030 11:17:16.946240   12118 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:17:16.946258   12118 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:17:16.946267   12118 cache.go:56] Caching tarball of preloaded images
	I1030 11:17:16.946358   12118 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:17:16.946367   12118 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:17:16.946618   12118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/addons-644000/config.json ...
	I1030 11:17:16.946630   12118 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/addons-644000/config.json: {Name:mk690eb41c7cb696bd30fd58590ae00718948890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:17:16.946994   12118 start.go:360] acquireMachinesLock for addons-644000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:17:16.947082   12118 start.go:364] duration metric: took 82.375µs to acquireMachinesLock for "addons-644000"
	I1030 11:17:16.947094   12118 start.go:93] Provisioning new machine with config: &{Name:addons-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:17:16.947125   12118 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:17:16.955222   12118 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1030 11:17:16.972131   12118 start.go:159] libmachine.API.Create for "addons-644000" (driver="qemu2")
	I1030 11:17:16.972171   12118 client.go:168] LocalClient.Create starting
	I1030 11:17:16.972329   12118 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:17:17.034771   12118 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:17:17.085924   12118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:17:17.244097   12118 main.go:141] libmachine: Creating SSH key...
	I1030 11:17:17.361343   12118 main.go:141] libmachine: Creating Disk image...
	I1030 11:17:17.361349   12118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:17:17.361565   12118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2
	I1030 11:17:17.371614   12118 main.go:141] libmachine: STDOUT: 
	I1030 11:17:17.371635   12118 main.go:141] libmachine: STDERR: 
	I1030 11:17:17.371688   12118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2 +20000M
	I1030 11:17:17.380140   12118 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:17:17.380156   12118 main.go:141] libmachine: STDERR: 
	I1030 11:17:17.380168   12118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2
	I1030 11:17:17.380175   12118 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:17:17.380214   12118 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:17:17.380248   12118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:65:49:2b:8a:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2
	I1030 11:17:17.382047   12118 main.go:141] libmachine: STDOUT: 
	I1030 11:17:17.382060   12118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:17:17.382088   12118 client.go:171] duration metric: took 409.906958ms to LocalClient.Create
	I1030 11:17:19.384244   12118 start.go:128] duration metric: took 2.437128666s to createHost
	I1030 11:17:19.384298   12118 start.go:83] releasing machines lock for "addons-644000", held for 2.43723775s
	W1030 11:17:19.384349   12118 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:17:19.400748   12118 out.go:177] * Deleting "addons-644000" in qemu2 ...
	W1030 11:17:19.425885   12118 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:17:19.425911   12118 start.go:729] Will try again in 5 seconds ...
	I1030 11:17:24.428006   12118 start.go:360] acquireMachinesLock for addons-644000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:17:24.428595   12118 start.go:364] duration metric: took 492.75µs to acquireMachinesLock for "addons-644000"
	I1030 11:17:24.428750   12118 start.go:93] Provisioning new machine with config: &{Name:addons-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:17:24.429085   12118 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:17:24.439725   12118 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1030 11:17:24.489515   12118 start.go:159] libmachine.API.Create for "addons-644000" (driver="qemu2")
	I1030 11:17:24.489565   12118 client.go:168] LocalClient.Create starting
	I1030 11:17:24.489714   12118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:17:24.489813   12118 main.go:141] libmachine: Decoding PEM data...
	I1030 11:17:24.489836   12118 main.go:141] libmachine: Parsing certificate...
	I1030 11:17:24.489914   12118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:17:24.489973   12118 main.go:141] libmachine: Decoding PEM data...
	I1030 11:17:24.489984   12118 main.go:141] libmachine: Parsing certificate...
	I1030 11:17:24.490782   12118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:17:24.660983   12118 main.go:141] libmachine: Creating SSH key...
	I1030 11:17:24.777567   12118 main.go:141] libmachine: Creating Disk image...
	I1030 11:17:24.777572   12118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:17:24.777754   12118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2
	I1030 11:17:24.787789   12118 main.go:141] libmachine: STDOUT: 
	I1030 11:17:24.787818   12118 main.go:141] libmachine: STDERR: 
	I1030 11:17:24.787875   12118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2 +20000M
	I1030 11:17:24.796338   12118 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:17:24.796360   12118 main.go:141] libmachine: STDERR: 
	I1030 11:17:24.796374   12118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2
	I1030 11:17:24.796379   12118 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:17:24.796387   12118 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:17:24.796436   12118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:5f:40:f9:5e:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/addons-644000/disk.qcow2
	I1030 11:17:24.798275   12118 main.go:141] libmachine: STDOUT: 
	I1030 11:17:24.798288   12118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:17:24.798299   12118 client.go:171] duration metric: took 308.731083ms to LocalClient.Create
	I1030 11:17:26.800458   12118 start.go:128] duration metric: took 2.371373375s to createHost
	I1030 11:17:26.800553   12118 start.go:83] releasing machines lock for "addons-644000", held for 2.371960584s
	W1030 11:17:26.801057   12118 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:17:26.814712   12118 out.go:201] 
	W1030 11:17:26.817849   12118 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:17:26.817895   12118 out.go:270] * 
	* 
	W1030 11:17:26.820712   12118 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:17:26.830766   12118 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-644000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.02s)
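
The log shows the same two-attempt provisioning loop as TestOffline: createHost fails, the half-created "addons-644000" profile is deleted, start.go waits a fixed five seconds ("Will try again in 5 seconds ..."), and the second attempt fails identically before the run exits with GUEST_PROVISION. A compact sketch of that control flow as read off the log (hypothetical stand-in code, not minikube's implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the createHost step; on this agent it always
	// fails the same way, so both attempts fail just as the log does.
	func startHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		profile := "addons-644000"
		if err := startHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// The log shows the profile being deleted, then a fixed 5s pause.
			time.Sleep(5 * time.Second)
			if err := startHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}

Because the root cause is the host-side daemon rather than anything in the profile, the retry is guaranteed to fail, which is why every start in this report takes almost exactly two attempts plus the 5-second delay.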

TestCertOptions (10.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions


=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-978000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-978000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.075775125s)

-- stdout --
	* [cert-options-978000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-978000" primary control-plane node in "cert-options-978000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-978000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-978000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-978000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-978000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-978000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (89.761542ms)

-- stdout --
	* The control-plane node cert-options-978000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-978000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-978000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-978000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-978000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-978000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (45.815709ms)

-- stdout --
	* The control-plane node cert-options-978000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-978000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-978000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-978000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-978000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-30 11:29:12.106289 -0700 PDT m=+751.290479543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-978000 -n cert-options-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-978000 -n cert-options-978000: exit status 7 (35.321334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-978000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-978000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-978000
--- FAIL: TestCertOptions (10.36s)
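
All of the SAN assertions at cert_options_test.go:69 fail vacuously here: the ssh step at line 60 already returned exit status 83 because the host never ran, so there was no apiserver.crt to inspect. For reference, the check the test would have performed amounts to parsing the certificate and looking for the requested IPs and hostnames among its SAN fields; a minimal crypto/x509 sketch (assuming a hypothetical local copy of apiserver.crt, not the test's own code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Hypothetical local copy of the cert the test would fetch over ssh.
		pemBytes, err := os.ReadFile("apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block in apiserver.crt")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// One of the SAN entries requested via --apiserver-ips in the run above.
		wantIP := net.ParseIP("192.168.15.15")
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(wantIP) {
				found = true
			}
		}
		fmt.Printf("192.168.15.15 in SAN IPs: %v; SAN DNS names: %v\n", found, cert.DNSNames)
	}

The same pattern covers the DNS-name assertions (localhost, www.google.com) via cert.DNSNames.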

TestCertExpiration (195.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration


=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-493000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-493000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.024898458s)

-- stdout --
	* [cert-expiration-493000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-493000" primary control-plane node in "cert-expiration-493000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-493000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-493000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-493000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-493000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-493000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.239490167s)

-- stdout --
	* [cert-expiration-493000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-493000" primary control-plane node in "cert-expiration-493000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-493000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-493000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-493000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-493000" primary control-plane node in "cert-expiration-493000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-493000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-30 11:32:12.03336 -0700 PDT m=+931.219666085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-493000 -n cert-expiration-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-493000 -n cert-expiration-493000: exit status 7 (63.966458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-493000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-493000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-493000
--- FAIL: TestCertExpiration (195.41s)
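
The 195 s wall time, against starts that fail within roughly 10 s and 5 s, comes from the test sitting out the three-minute --cert-expiration=3m window between the two starts so the certificates can genuinely expire before the restart with --cert-expiration=8760h. The property it would then assert is simply that the certificate's NotAfter lies in the past; a minimal expiry check with crypto/x509 (illustrative only, against a hypothetical local apiserver.crt):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical input path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block in apiserver.crt")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// With --cert-expiration=3m, the cert should be expired after the
		// test's three-minute wait; minikube is expected to warn about it.
		if time.Now().After(cert.NotAfter) {
			fmt.Println("certificate expired at", cert.NotAfter)
		} else {
			fmt.Println("certificate valid until", cert.NotAfter)
		}
	}

Here the warning never appears because the restart fails at the socket_vmnet step before any certificate is ever examined.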

TestDockerFlags (10.1s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags


=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-234000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-234000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.848291375s)

-- stdout --
	* [docker-flags-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-234000" primary control-plane node in "docker-flags-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:28:51.790710   13583 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:28:51.790851   13583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:51.790855   13583 out.go:358] Setting ErrFile to fd 2...
	I1030 11:28:51.790857   13583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:51.790982   13583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:28:51.792089   13583 out.go:352] Setting JSON to false
	I1030 11:28:51.809959   13583 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7102,"bootTime":1730305829,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:28:51.810025   13583 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:28:51.816214   13583 out.go:177] * [docker-flags-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:28:51.823071   13583 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:28:51.823166   13583 notify.go:220] Checking for updates...
	I1030 11:28:51.831185   13583 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:28:51.834089   13583 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:28:51.838139   13583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:28:51.841180   13583 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:28:51.844160   13583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:28:51.847583   13583 config.go:182] Loaded profile config "force-systemd-flag-269000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:28:51.847658   13583 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:28:51.847702   13583 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:28:51.851152   13583 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:28:51.858185   13583 start.go:297] selected driver: qemu2
	I1030 11:28:51.858191   13583 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:28:51.858199   13583 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:28:51.860723   13583 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:28:51.865202   13583 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:28:51.868199   13583 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1030 11:28:51.868220   13583 cni.go:84] Creating CNI manager for ""
	I1030 11:28:51.868250   13583 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:28:51.868254   13583 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:28:51.868293   13583 start.go:340] cluster config:
	{Name:docker-flags-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:28:51.873004   13583 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:28:51.881214   13583 out.go:177] * Starting "docker-flags-234000" primary control-plane node in "docker-flags-234000" cluster
	I1030 11:28:51.885132   13583 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:28:51.885147   13583 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:28:51.885154   13583 cache.go:56] Caching tarball of preloaded images
	I1030 11:28:51.885226   13583 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:28:51.885232   13583 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:28:51.885289   13583 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/docker-flags-234000/config.json ...
	I1030 11:28:51.885300   13583 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/docker-flags-234000/config.json: {Name:mk3ad0208444be354c5df7f413cd81479b7215da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:28:51.885697   13583 start.go:360] acquireMachinesLock for docker-flags-234000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:51.885750   13583 start.go:364] duration metric: took 45.458µs to acquireMachinesLock for "docker-flags-234000"
	I1030 11:28:51.885764   13583 start.go:93] Provisioning new machine with config: &{Name:docker-flags-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:51.885791   13583 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:51.894190   13583 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:51.912729   13583 start.go:159] libmachine.API.Create for "docker-flags-234000" (driver="qemu2")
	I1030 11:28:51.912758   13583 client.go:168] LocalClient.Create starting
	I1030 11:28:51.912836   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:51.912876   13583 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:51.912887   13583 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:51.912925   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:51.912955   13583 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:51.912962   13583 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:51.913420   13583 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:52.073248   13583 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:52.127831   13583 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:52.127836   13583 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:52.128044   13583 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2
	I1030 11:28:52.137858   13583 main.go:141] libmachine: STDOUT: 
	I1030 11:28:52.137879   13583 main.go:141] libmachine: STDERR: 
	I1030 11:28:52.137938   13583 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2 +20000M
	I1030 11:28:52.146327   13583 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:52.146343   13583 main.go:141] libmachine: STDERR: 
	I1030 11:28:52.146365   13583 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2
	I1030 11:28:52.146371   13583 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:52.146383   13583 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:52.146415   13583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:92:57:89:c3:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2
	I1030 11:28:52.148221   13583 main.go:141] libmachine: STDOUT: 
	I1030 11:28:52.148232   13583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:52.148257   13583 client.go:171] duration metric: took 235.496292ms to LocalClient.Create
	I1030 11:28:54.150400   13583 start.go:128] duration metric: took 2.264620042s to createHost
	I1030 11:28:54.150545   13583 start.go:83] releasing machines lock for "docker-flags-234000", held for 2.264759875s
	W1030 11:28:54.150645   13583 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:54.175762   13583 out.go:177] * Deleting "docker-flags-234000" in qemu2 ...
	W1030 11:28:54.198019   13583 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:54.198035   13583 start.go:729] Will try again in 5 seconds ...
	I1030 11:28:59.200153   13583 start.go:360] acquireMachinesLock for docker-flags-234000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:59.200446   13583 start.go:364] duration metric: took 238.291µs to acquireMachinesLock for "docker-flags-234000"
	I1030 11:28:59.200511   13583 start.go:93] Provisioning new machine with config: &{Name:docker-flags-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:59.200715   13583 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:59.207241   13583 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:59.248355   13583 start.go:159] libmachine.API.Create for "docker-flags-234000" (driver="qemu2")
	I1030 11:28:59.248420   13583 client.go:168] LocalClient.Create starting
	I1030 11:28:59.248562   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:59.248647   13583 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:59.248670   13583 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:59.248750   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:59.248807   13583 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:59.248824   13583 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:59.249477   13583 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:59.421239   13583 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:59.532787   13583 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:59.532793   13583 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:59.532998   13583 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2
	I1030 11:28:59.543015   13583 main.go:141] libmachine: STDOUT: 
	I1030 11:28:59.543036   13583 main.go:141] libmachine: STDERR: 
	I1030 11:28:59.543095   13583 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2 +20000M
	I1030 11:28:59.551713   13583 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:59.551730   13583 main.go:141] libmachine: STDERR: 
	I1030 11:28:59.551743   13583 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2
	I1030 11:28:59.551748   13583 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:59.551757   13583 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:59.551792   13583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5a:42:42:db:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/docker-flags-234000/disk.qcow2
	I1030 11:28:59.553637   13583 main.go:141] libmachine: STDOUT: 
	I1030 11:28:59.553651   13583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:59.553663   13583 client.go:171] duration metric: took 305.241166ms to LocalClient.Create
	I1030 11:29:01.555811   13583 start.go:128] duration metric: took 2.355092583s to createHost
	I1030 11:29:01.555939   13583 start.go:83] releasing machines lock for "docker-flags-234000", held for 2.355499125s
	W1030 11:29:01.556295   13583 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:29:01.568969   13583 out.go:201] 
	W1030 11:29:01.581243   13583 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:29:01.581264   13583 out.go:270] * 
	* 
	W1030 11:29:01.584108   13583 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:29:01.590989   13583 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-234000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-234000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-234000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (93.105209ms)

-- stdout --
	* The control-plane node docker-flags-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-234000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-234000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-234000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-234000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-234000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-234000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.723375ms)

-- stdout --
	* The control-plane node docker-flags-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-234000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-234000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-234000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-234000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-30 11:29:01.749902 -0700 PDT m=+740.933970710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-234000 -n docker-flags-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-234000 -n docker-flags-234000: exit status 7 (33.507708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-234000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-234000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-234000
--- FAIL: TestDockerFlags (10.10s)
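
Both creation attempts above, and every other qemu2 failure in this report, reduce to the same condition: nothing is answering on the unix socket /var/run/socket_vmnet (the SocketVMnetPath in the cluster config), so socket_vmnet_client exits before QEMU ever starts. The following probe reproduces just that check in isolation; it is a diagnostic sketch, not minikube code.

// Minimal pre-flight probe for the socket_vmnet daemon socket. The path
// matches the SocketVMnetPath printed in the cluster config above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Same condition the failing tests hit:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}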

TestForceSystemdFlag (10.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-269000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-269000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.095733042s)

-- stdout --
	* [force-systemd-flag-269000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-269000" primary control-plane node in "force-systemd-flag-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:28:46.491607   13562 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:28:46.491770   13562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:46.491774   13562 out.go:358] Setting ErrFile to fd 2...
	I1030 11:28:46.491776   13562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:46.491883   13562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:28:46.493012   13562 out.go:352] Setting JSON to false
	I1030 11:28:46.511464   13562 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7097,"bootTime":1730305829,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:28:46.511536   13562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:28:46.517887   13562 out.go:177] * [force-systemd-flag-269000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:28:46.532171   13562 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:28:46.532194   13562 notify.go:220] Checking for updates...
	I1030 11:28:46.540961   13562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:28:46.548882   13562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:28:46.552034   13562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:28:46.554902   13562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:28:46.557897   13562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:28:46.561337   13562 config.go:182] Loaded profile config "force-systemd-env-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:28:46.561422   13562 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:28:46.561467   13562 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:28:46.565876   13562 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:28:46.572893   13562 start.go:297] selected driver: qemu2
	I1030 11:28:46.572899   13562 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:28:46.572905   13562 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:28:46.575543   13562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:28:46.578893   13562 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:28:46.581984   13562 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 11:28:46.582001   13562 cni.go:84] Creating CNI manager for ""
	I1030 11:28:46.582026   13562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:28:46.582031   13562 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:28:46.582066   13562 start.go:340] cluster config:
	{Name:force-systemd-flag-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:28:46.586965   13562 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:28:46.593898   13562 out.go:177] * Starting "force-systemd-flag-269000" primary control-plane node in "force-systemd-flag-269000" cluster
	I1030 11:28:46.597919   13562 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:28:46.597936   13562 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:28:46.597945   13562 cache.go:56] Caching tarball of preloaded images
	I1030 11:28:46.598018   13562 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:28:46.598024   13562 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:28:46.598079   13562 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/force-systemd-flag-269000/config.json ...
	I1030 11:28:46.598091   13562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/force-systemd-flag-269000/config.json: {Name:mk307f17267b0177129f469241f76e4c41eec2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:28:46.598597   13562 start.go:360] acquireMachinesLock for force-systemd-flag-269000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:46.598652   13562 start.go:364] duration metric: took 47.042µs to acquireMachinesLock for "force-systemd-flag-269000"
	I1030 11:28:46.598668   13562 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:46.598712   13562 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:46.606912   13562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:46.625381   13562 start.go:159] libmachine.API.Create for "force-systemd-flag-269000" (driver="qemu2")
	I1030 11:28:46.625405   13562 client.go:168] LocalClient.Create starting
	I1030 11:28:46.625479   13562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:46.625520   13562 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:46.625536   13562 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:46.625577   13562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:46.625611   13562 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:46.625621   13562 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:46.626025   13562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:46.788086   13562 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:46.961177   13562 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:46.961184   13562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:46.961397   13562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2
	I1030 11:28:46.971435   13562 main.go:141] libmachine: STDOUT: 
	I1030 11:28:46.971461   13562 main.go:141] libmachine: STDERR: 
	I1030 11:28:46.971530   13562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2 +20000M
	I1030 11:28:46.979919   13562 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:46.979934   13562 main.go:141] libmachine: STDERR: 
	I1030 11:28:46.979954   13562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2
	I1030 11:28:46.979960   13562 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:46.979975   13562 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:46.980002   13562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:fe:88:28:5f:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2
	I1030 11:28:46.981781   13562 main.go:141] libmachine: STDOUT: 
	I1030 11:28:46.981798   13562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:46.981822   13562 client.go:171] duration metric: took 356.415709ms to LocalClient.Create
	I1030 11:28:48.983981   13562 start.go:128] duration metric: took 2.385278791s to createHost
	I1030 11:28:48.984078   13562 start.go:83] releasing machines lock for "force-systemd-flag-269000", held for 2.385412042s
	W1030 11:28:48.984125   13562 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:49.003302   13562 out.go:177] * Deleting "force-systemd-flag-269000" in qemu2 ...
	W1030 11:28:49.028736   13562 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:49.028757   13562 start.go:729] Will try again in 5 seconds ...
	I1030 11:28:54.030942   13562 start.go:360] acquireMachinesLock for force-systemd-flag-269000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:54.150700   13562 start.go:364] duration metric: took 119.6145ms to acquireMachinesLock for "force-systemd-flag-269000"
	I1030 11:28:54.150846   13562 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:54.151150   13562 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:54.160813   13562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:54.207310   13562 start.go:159] libmachine.API.Create for "force-systemd-flag-269000" (driver="qemu2")
	I1030 11:28:54.207354   13562 client.go:168] LocalClient.Create starting
	I1030 11:28:54.207491   13562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:54.207572   13562 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:54.207591   13562 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:54.207647   13562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:54.207703   13562 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:54.207714   13562 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:54.208235   13562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:54.382382   13562 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:54.489530   13562 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:54.489535   13562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:54.489733   13562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2
	I1030 11:28:54.499913   13562 main.go:141] libmachine: STDOUT: 
	I1030 11:28:54.499927   13562 main.go:141] libmachine: STDERR: 
	I1030 11:28:54.499992   13562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2 +20000M
	I1030 11:28:54.508441   13562 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:54.508454   13562 main.go:141] libmachine: STDERR: 
	I1030 11:28:54.508468   13562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2
	I1030 11:28:54.508479   13562 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:54.508489   13562 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:54.508516   13562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:81:b0:ed:df:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-flag-269000/disk.qcow2
	I1030 11:28:54.510322   13562 main.go:141] libmachine: STDOUT: 
	I1030 11:28:54.510337   13562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:54.510350   13562 client.go:171] duration metric: took 302.993375ms to LocalClient.Create
	I1030 11:28:56.511642   13562 start.go:128] duration metric: took 2.360491917s to createHost
	I1030 11:28:56.511704   13562 start.go:83] releasing machines lock for "force-systemd-flag-269000", held for 2.361004208s
	W1030 11:28:56.512043   13562 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:56.526706   13562 out.go:201] 
	W1030 11:28:56.532705   13562 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:28:56.532746   13562 out.go:270] * 
	* 
	W1030 11:28:56.535457   13562 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:28:56.541033   13562 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-269000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-269000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-269000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (87.466708ms)

-- stdout --
	* The control-plane node force-systemd-flag-269000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-269000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-269000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-30 11:28:56.646645 -0700 PDT m=+735.830654126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-269000 -n force-systemd-flag-269000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-269000 -n force-systemd-flag-269000: exit status 7 (36.688708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-269000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-269000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-269000
--- FAIL: TestForceSystemdFlag (10.31s)
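
For reference, the assertion this test never reaches (docker_test.go:110) is that a --force-systemd start leaves Docker inside the VM on the systemd cgroup driver. Below is a rough standalone reconstruction of that check, not the actual test code: the binary path and profile name are taken from the log, and the control flow is simplified.

// Sketch of the cgroup-driver check behind docker_test.go:110. Here it would
// fail early, because "ssh" against a Stopped host exits with status 83.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-269000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), "systemd") {
		fmt.Printf("expected cgroup driver systemd, got %q\n", out)
	}
}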

TestForceSystemdEnv (10.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-842000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1030 11:28:41.513171   12043 install.go:79] stdout: 
W1030 11:28:41.513327   12043 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit 


I1030 11:28:41.513345   12043 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit]
I1030 11:28:41.526009   12043 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit]
I1030 11:28:41.540794   12043 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit]
I1030 11:28:41.552523   12043 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit]
I1030 11:28:41.574680   12043 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1030 11:28:41.574812   12043 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1030 11:28:43.373746   12043 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1030 11:28:43.373764   12043 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1030 11:28:43.373808   12043 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1030 11:28:43.373846   12043 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit
I1030 11:28:43.762814   12043 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10914e700 0x10914e700 0x10914e700 0x10914e700 0x10914e700 0x10914e700 0x10914e700] Decompressors:map[bz2:0x1400070b5d0 gz:0x1400070b5d8 tar:0x1400070b4a0 tar.bz2:0x1400070b4c0 tar.gz:0x1400070b520 tar.xz:0x1400070b570 tar.zst:0x1400070b5a0 tbz2:0x1400070b4c0 tgz:0x1400070b520 txz:0x1400070b570 tzst:0x1400070b5a0 xz:0x1400070b5e0 zip:0x1400070b600 zst:0x1400070b5e8] Getters:map[file:0x140004fdc70 http:0x1400002aa00 https:0x1400002aa50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1030 11:28:43.762931   12043 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit
I1030 11:28:46.408288   12043 install.go:79] stdout: 
W1030 11:28:46.408474   12043 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit 


I1030 11:28:46.408500   12043 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit]
I1030 11:28:46.425392   12043 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit]
I1030 11:28:46.438306   12043 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit]
I1030 11:28:46.448928   12043 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/002/docker-machine-driver-hyperkit]
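The interleaved install.go lines above (pid 12043, from the concurrent hyperkit driver install/update test) follow a consistent pattern: each privileged step is first probed with "sudo -n", which fails instead of prompting for a password, and only then executed. A condensed Go sketch of that probe-then-run pattern; the helper name and the local path are mine, not minikube's:

package main

import (
	"fmt"
	"os/exec"
)

// sudoRun mirrors the "testing:" / "running:" pairs in the log above:
// probe with sudo -n so an interactive password prompt becomes a hard
// failure, then run the real command. Illustrative, not minikube source.
func sudoRun(args ...string) error {
	if err := exec.Command("sudo", append([]string{"-n"}, args...)...).Run(); err != nil {
		return fmt.Errorf("passwordless sudo unavailable for %v: %w", args, err)
	}
	return exec.Command("sudo", args...).Run()
}

func main() {
	// Hypothetical local path; the test uses a per-run temp directory.
	if err := sudoRun("chown", "root:wheel", "./docker-machine-driver-hyperkit"); err != nil {
		fmt.Println(err)
	}
}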
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-842000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.0414665s)

-- stdout --
	* [force-systemd-env-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-842000" primary control-plane node in "force-systemd-env-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:28:41.543273   13530 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:28:41.543446   13530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:41.543449   13530 out.go:358] Setting ErrFile to fd 2...
	I1030 11:28:41.543451   13530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:28:41.543577   13530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:28:41.544942   13530 out.go:352] Setting JSON to false
	I1030 11:28:41.563704   13530 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7092,"bootTime":1730305829,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:28:41.563794   13530 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:28:41.569479   13530 out.go:177] * [force-systemd-env-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:28:41.577484   13530 notify.go:220] Checking for updates...
	I1030 11:28:41.580450   13530 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:28:41.583403   13530 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:28:41.587460   13530 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:28:41.591240   13530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:28:41.594397   13530 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:28:41.597426   13530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1030 11:28:41.600789   13530 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:28:41.600837   13530 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:28:41.605357   13530 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:28:41.612388   13530 start.go:297] selected driver: qemu2
	I1030 11:28:41.612394   13530 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:28:41.612400   13530 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:28:41.615032   13530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:28:41.618424   13530 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:28:41.622473   13530 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 11:28:41.622491   13530 cni.go:84] Creating CNI manager for ""
	I1030 11:28:41.622512   13530 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:28:41.622519   13530 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:28:41.622551   13530 start.go:340] cluster config:
	{Name:force-systemd-env-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:28:41.627514   13530 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:28:41.635426   13530 out.go:177] * Starting "force-systemd-env-842000" primary control-plane node in "force-systemd-env-842000" cluster
	I1030 11:28:41.639343   13530 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:28:41.639374   13530 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:28:41.639382   13530 cache.go:56] Caching tarball of preloaded images
	I1030 11:28:41.639482   13530 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:28:41.639490   13530 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:28:41.639563   13530 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/force-systemd-env-842000/config.json ...
	I1030 11:28:41.639576   13530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/force-systemd-env-842000/config.json: {Name:mk7c9719040e007f95e2edcb865304442f76dc73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:28:41.639976   13530 start.go:360] acquireMachinesLock for force-systemd-env-842000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:41.640029   13530 start.go:364] duration metric: took 44.083µs to acquireMachinesLock for "force-systemd-env-842000"
	I1030 11:28:41.640045   13530 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:41.640094   13530 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:41.648431   13530 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:41.665566   13530 start.go:159] libmachine.API.Create for "force-systemd-env-842000" (driver="qemu2")
	I1030 11:28:41.665606   13530 client.go:168] LocalClient.Create starting
	I1030 11:28:41.665680   13530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:41.665720   13530 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:41.665733   13530 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:41.665771   13530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:41.665804   13530 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:41.665813   13530 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:41.666160   13530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:41.822532   13530 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:41.864955   13530 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:41.864961   13530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:41.865167   13530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2
	I1030 11:28:41.875303   13530 main.go:141] libmachine: STDOUT: 
	I1030 11:28:41.875322   13530 main.go:141] libmachine: STDERR: 
	I1030 11:28:41.875377   13530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2 +20000M
	I1030 11:28:41.884314   13530 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:41.884336   13530 main.go:141] libmachine: STDERR: 
	I1030 11:28:41.884355   13530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2
	I1030 11:28:41.884363   13530 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:41.884392   13530 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:41.884417   13530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:67:ed:26:ff:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2
	I1030 11:28:41.886305   13530 main.go:141] libmachine: STDOUT: 
	I1030 11:28:41.886322   13530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:41.886343   13530 client.go:171] duration metric: took 220.734042ms to LocalClient.Create
	I1030 11:28:43.888534   13530 start.go:128] duration metric: took 2.248439833s to createHost
	I1030 11:28:43.888619   13530 start.go:83] releasing machines lock for "force-systemd-env-842000", held for 2.248604625s
	W1030 11:28:43.888683   13530 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:43.904917   13530 out.go:177] * Deleting "force-systemd-env-842000" in qemu2 ...
	W1030 11:28:43.934115   13530 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:43.934155   13530 start.go:729] Will try again in 5 seconds ...
	I1030 11:28:48.936331   13530 start.go:360] acquireMachinesLock for force-systemd-env-842000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:48.984176   13530 start.go:364] duration metric: took 47.736917ms to acquireMachinesLock for "force-systemd-env-842000"
	I1030 11:28:48.984322   13530 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:48.984645   13530 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:48.994342   13530 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1030 11:28:49.041984   13530 start.go:159] libmachine.API.Create for "force-systemd-env-842000" (driver="qemu2")
	I1030 11:28:49.042033   13530 client.go:168] LocalClient.Create starting
	I1030 11:28:49.042178   13530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:49.042251   13530 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:49.042269   13530 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:49.042332   13530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:49.042388   13530 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:49.042400   13530 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:49.043013   13530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:49.218563   13530 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:49.475389   13530 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:49.475402   13530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:49.475639   13530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2
	I1030 11:28:49.485897   13530 main.go:141] libmachine: STDOUT: 
	I1030 11:28:49.485920   13530 main.go:141] libmachine: STDERR: 
	I1030 11:28:49.486000   13530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2 +20000M
	I1030 11:28:49.494525   13530 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:49.494541   13530 main.go:141] libmachine: STDERR: 
	I1030 11:28:49.494556   13530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2
	I1030 11:28:49.494559   13530 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:49.494574   13530 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:49.494599   13530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:51:46:83:d7:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/force-systemd-env-842000/disk.qcow2
	I1030 11:28:49.496344   13530 main.go:141] libmachine: STDOUT: 
	I1030 11:28:49.496359   13530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:49.496373   13530 client.go:171] duration metric: took 454.340375ms to LocalClient.Create
	I1030 11:28:51.498512   13530 start.go:128] duration metric: took 2.513869042s to createHost
	I1030 11:28:51.498575   13530 start.go:83] releasing machines lock for "force-systemd-env-842000", held for 2.514407709s
	W1030 11:28:51.498954   13530 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:51.515412   13530 out.go:201] 
	W1030 11:28:51.522264   13530 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:28:51.522294   13530 out.go:270] * 
	* 
	W1030 11:28:51.524861   13530 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:28:51.534182   13530 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-842000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-842000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-842000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (91.512ms)

-- stdout --
	* The control-plane node force-systemd-env-842000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-842000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-842000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-30 11:28:51.644301 -0700 PDT m=+730.828251251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-842000 -n force-systemd-env-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-842000 -n force-systemd-env-842000: exit status 7 (36.728291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-842000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-842000
--- FAIL: TestForceSystemdEnv (10.25s)
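For reference, the check that docker_test.go:110 performs once a VM is actually up can be run standalone; with MINIKUBE_FORCE_SYSTEMD=true the expected answer is "systemd". A Go sketch of that same probe (profile name copied from this run; it only succeeds against a running cluster):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs: ask the in-VM Docker daemon for its
	// cgroup driver via a Go template.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-env-842000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed (host likely not running): %v\n%s", err, out)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Printf("cgroup driver = %q, want systemd\n", got)
	}
}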

TestErrorSpam/setup (9.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-957000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-957000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 --driver=qemu2 : exit status 80 (9.797537958s)

-- stdout --
	* [nospam-957000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-957000" primary control-plane node in "nospam-957000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-957000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-957000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-957000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-957000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-957000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19883
- KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-957000" primary control-plane node in "nospam-957000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-957000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused



error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-957000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.80s)
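The error_spam_test.go:96 lines above come from a scan of minikube's stderr: any line that is not an explicitly permitted warning counts as spam. A simplified Go sketch of that scan; the allowlist here is illustrative and far shorter than the suite's:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Illustrative allowlist: prefixes of warnings the scan would tolerate.
	allowed := []string{"! Local proxy ignored"}
	sc := bufio.NewScanner(os.Stdin)
scan:
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		for _, prefix := range allowed {
			if strings.HasPrefix(line, prefix) {
				continue scan // permitted warning, not spam
			}
		}
		fmt.Printf("unexpected stderr: %q\n", line)
	}
}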

TestFunctional/serial/StartWithProxy (10.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-484000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-484000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.995436416s)

-- stdout --
	* [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-484000" primary control-plane node in "functional-484000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-484000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57010 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57010 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57010 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-484000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19883
- KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-484000" primary control-plane node in "functional-484000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-484000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused



, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:57010 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:57010 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:57010 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (73.398708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.07s)
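StartWithProxy runs the start under a local HTTP proxy (HTTP_PROXY=localhost:57010 in this run) and expects the "You appear to be using a proxy" notice; here the start failed on socket_vmnet before reaching that point, leaving only the "Local proxy ignored" warnings. A Go sketch of reproducing the invocation with the proxy environment set, assuming the same binary and flags as above:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// minikube deliberately refuses to pass a localhost proxy into the VM,
	// which is what produces the "Local proxy ignored" stderr lines.
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-484000",
		"--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
	cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:57010")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("start failed:", err)
	}
}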

TestFunctional/serial/SoftStart (5.29s)

=== RUN   TestFunctional/serial/SoftStart
I1030 11:17:56.648345   12043 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-484000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-484000 --alsologtostderr -v=8: exit status 80 (5.21300425s)

-- stdout --
	* [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-484000" primary control-plane node in "functional-484000" cluster
	* Restarting existing qemu2 VM for "functional-484000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-484000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:17:56.682241   12260 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:17:56.682410   12260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:56.682413   12260 out.go:358] Setting ErrFile to fd 2...
	I1030 11:17:56.682415   12260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:56.682549   12260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:17:56.683693   12260 out.go:352] Setting JSON to false
	I1030 11:17:56.701386   12260 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6447,"bootTime":1730305829,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:17:56.701462   12260 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:17:56.706847   12260 out.go:177] * [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:17:56.712705   12260 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:17:56.712743   12260 notify.go:220] Checking for updates...
	I1030 11:17:56.720684   12260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:17:56.724723   12260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:17:56.727746   12260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:17:56.730668   12260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:17:56.733677   12260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:17:56.737035   12260 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:17:56.737089   12260 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:17:56.741700   12260 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:17:56.748744   12260 start.go:297] selected driver: qemu2
	I1030 11:17:56.748752   12260 start.go:901] validating driver "qemu2" against &{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:17:56.748805   12260 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:17:56.751321   12260 cni.go:84] Creating CNI manager for ""
	I1030 11:17:56.751362   12260 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:17:56.751416   12260 start.go:340] cluster config:
	{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:17:56.755963   12260 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:17:56.762680   12260 out.go:177] * Starting "functional-484000" primary control-plane node in "functional-484000" cluster
	I1030 11:17:56.766695   12260 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:17:56.766714   12260 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:17:56.766723   12260 cache.go:56] Caching tarball of preloaded images
	I1030 11:17:56.766816   12260 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:17:56.766822   12260 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:17:56.766876   12260 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/functional-484000/config.json ...
	I1030 11:17:56.767350   12260 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:17:56.767382   12260 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "functional-484000"
	I1030 11:17:56.767391   12260 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:17:56.767395   12260 fix.go:54] fixHost starting: 
	I1030 11:17:56.767523   12260 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
	W1030 11:17:56.767530   12260 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:17:56.774724   12260 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
	I1030 11:17:56.790751   12260 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:17:56.790797   12260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
	I1030 11:17:56.793242   12260 main.go:141] libmachine: STDOUT: 
	I1030 11:17:56.793265   12260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:17:56.793295   12260 fix.go:56] duration metric: took 25.898375ms for fixHost
	I1030 11:17:56.793299   12260 start.go:83] releasing machines lock for "functional-484000", held for 25.912666ms
	W1030 11:17:56.793305   12260 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:17:56.793364   12260 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:17:56.793368   12260 start.go:729] Will try again in 5 seconds ...
	I1030 11:18:01.795445   12260 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:18:01.795755   12260 start.go:364] duration metric: took 256.25µs to acquireMachinesLock for "functional-484000"
	I1030 11:18:01.795898   12260 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:18:01.795922   12260 fix.go:54] fixHost starting: 
	I1030 11:18:01.796553   12260 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
	W1030 11:18:01.796583   12260 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:18:01.803966   12260 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
	I1030 11:18:01.807881   12260 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:18:01.808048   12260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
	I1030 11:18:01.817697   12260 main.go:141] libmachine: STDOUT: 
	I1030 11:18:01.817774   12260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:18:01.817855   12260 fix.go:56] duration metric: took 21.934292ms for fixHost
	I1030 11:18:01.817916   12260 start.go:83] releasing machines lock for "functional-484000", held for 22.096667ms
	W1030 11:18:01.818124   12260 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:18:01.833868   12260 out.go:201] 
	W1030 11:18:01.837928   12260 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:18:01.837955   12260 out.go:270] * 
	* 
	W1030 11:18:01.840771   12260 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:18:01.847904   12260 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-484000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.214802833s for "functional-484000" cluster.
I1030 11:18:01.863362   12043 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (73.841209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.29s)
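
Every restart attempt above fails at the same step: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). As a minimal, hypothetical sketch (not part of minikube or this test suite), the failing dial can be reproduced from Go directly; a "connection refused" from this probe means the daemon is not listening on that socket, independent of any minikube state:

	// probe_socket_vmnet.go — hypothetical standalone check, not part of the
	// test suite. It dials the same unix socket the qemu2 driver uses; a
	// "connection refused" error reproduces the failure captured above.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the daemon itself is down on the build agent, which would explain why every subsequent test in this report that needs a running VM fails the same way.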

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.77075ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-484000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (35.467958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-484000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-484000 get po -A: exit status 1 (27.002584ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

                                                
                                                
** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-484000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-484000\n"*: args "kubectl --context functional-484000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-484000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (34.68725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl images: exit status 83 (47.777917ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

                                                
                                                
-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.792375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

                                                
                                                
-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-484000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (47.043ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

                                                
                                                
-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (46.9755ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

                                                
                                                
-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-484000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 kubectl -- --context functional-484000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 kubectl -- --context functional-484000 get pods: exit status 1 (1.827466625s)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-484000
	* no server found for cluster "functional-484000"

                                                
                                                
** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-484000 kubectl -- --context functional-484000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (36.017292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (1.86s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-484000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-484000 get pods: exit status 1 (1.165897542s)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-484000
	* no server found for cluster "functional-484000"

                                                
                                                
** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-484000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (33.576208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.20s)

                                                
                                    
TestFunctional/serial/ExtraConfig (5.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-484000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-484000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.195278s)

                                                
                                                
-- stdout --
	* [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-484000" primary control-plane node in "functional-484000" cluster
	* Restarting existing qemu2 VM for "functional-484000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-484000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-484000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.195778166s for "functional-484000" cluster.
I1030 11:18:13.798320   12043 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (75.734875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-484000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-484000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.267125ms)

                                                
                                                
** stderr ** 
	error: context "functional-484000" does not exist

                                                
                                                
** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-484000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (34.855125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 logs: exit status 83 (81.735666ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:16 PDT |                     |
	|         | -p download-only-089000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| start   | -o=json --download-only                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | -p download-only-276000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| delete  | -p download-only-276000                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| delete  | -p download-only-276000                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| start   | --download-only -p                                                       | binary-mirror-200000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | binary-mirror-200000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:56977                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-200000                                                  | binary-mirror-200000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| addons  | disable dashboard -p                                                     | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | addons-644000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | addons-644000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-644000 --wait=true                                             | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-644000                                                         | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| start   | -p nospam-957000 -n=1 --memory=2250 --wait=false                         | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-957000                                                         | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | minikube-local-cache-test:functional-484000                              |                      |         |         |                     |                     |
	| cache   | functional-484000 cache delete                                           | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | minikube-local-cache-test:functional-484000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	| ssh     | functional-484000 ssh sudo                                               | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-484000                                                        | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-484000 ssh                                                    | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-484000 cache reload                                           | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	| ssh     | functional-484000 ssh                                                    | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-484000 kubectl --                                             | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
	|         | --context functional-484000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 11:18:08
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 11:18:08.633245   12335 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:18:08.633384   12335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:08.633386   12335 out.go:358] Setting ErrFile to fd 2...
	I1030 11:18:08.633388   12335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:08.633493   12335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:18:08.634801   12335 out.go:352] Setting JSON to false
	I1030 11:18:08.652438   12335 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6459,"bootTime":1730305829,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:18:08.652504   12335 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:18:08.658162   12335 out.go:177] * [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:18:08.666080   12335 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:18:08.666165   12335 notify.go:220] Checking for updates...
	I1030 11:18:08.675061   12335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:18:08.678145   12335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:18:08.681116   12335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:18:08.684104   12335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:18:08.687101   12335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:18:08.690344   12335 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:18:08.690393   12335 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:18:08.695080   12335 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:18:08.702073   12335 start.go:297] selected driver: qemu2
	I1030 11:18:08.702078   12335 start.go:901] validating driver "qemu2" against &{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:18:08.702141   12335 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:18:08.704660   12335 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:18:08.704680   12335 cni.go:84] Creating CNI manager for ""
	I1030 11:18:08.704707   12335 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:18:08.704746   12335 start.go:340] cluster config:
	{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:18:08.709273   12335 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:18:08.716972   12335 out.go:177] * Starting "functional-484000" primary control-plane node in "functional-484000" cluster
	I1030 11:18:08.721107   12335 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:18:08.721121   12335 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:18:08.721130   12335 cache.go:56] Caching tarball of preloaded images
	I1030 11:18:08.721208   12335 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:18:08.721218   12335 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:18:08.721285   12335 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/functional-484000/config.json ...
	I1030 11:18:08.721729   12335 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:18:08.721776   12335 start.go:364] duration metric: took 43.167µs to acquireMachinesLock for "functional-484000"
	I1030 11:18:08.721783   12335 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:18:08.721786   12335 fix.go:54] fixHost starting: 
	I1030 11:18:08.721904   12335 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
	W1030 11:18:08.721911   12335 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:18:08.729138   12335 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
	I1030 11:18:08.733025   12335 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:18:08.733058   12335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
	I1030 11:18:08.735346   12335 main.go:141] libmachine: STDOUT: 
	I1030 11:18:08.735362   12335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:18:08.735393   12335 fix.go:56] duration metric: took 13.604917ms for fixHost
	I1030 11:18:08.735397   12335 start.go:83] releasing machines lock for "functional-484000", held for 13.618041ms
	W1030 11:18:08.735402   12335 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:18:08.735447   12335 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:18:08.735452   12335 start.go:729] Will try again in 5 seconds ...
	I1030 11:18:13.737681   12335 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:18:13.738156   12335 start.go:364] duration metric: took 388.25µs to acquireMachinesLock for "functional-484000"
	I1030 11:18:13.738337   12335 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:18:13.738353   12335 fix.go:54] fixHost starting: 
	I1030 11:18:13.739148   12335 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
	W1030 11:18:13.739165   12335 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:18:13.747781   12335 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
	I1030 11:18:13.751770   12335 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:18:13.752073   12335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
	I1030 11:18:13.762446   12335 main.go:141] libmachine: STDOUT: 
	I1030 11:18:13.762493   12335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:18:13.762579   12335 fix.go:56] duration metric: took 24.232417ms for fixHost
	I1030 11:18:13.762592   12335 start.go:83] releasing machines lock for "functional-484000", held for 24.411834ms
	W1030 11:18:13.762812   12335 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:18:13.769699   12335 out.go:201] 
	W1030 11:18:13.773935   12335 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:18:13.773965   12335 out.go:270] * 
	W1030 11:18:13.776576   12335 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:18:13.784836   12335 out.go:201] 
	
	
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
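The repeated STDERR above, 'Failed to connect to "/var/run/socket_vmnet": Connection refused', means the socket_vmnet daemon was not running on the build host, so every qemu2 start in this run failed before the VM could boot. A minimal host-side triage sketch, assuming a Homebrew install of socket_vmnet at the paths shown in the log (the service name and restart command below are assumptions, not taken from this report):

    # Is the daemon listening on the socket the driver dials?
    ls -l /var/run/socket_vmnet

    # Assumed Homebrew-managed service: socket_vmnet needs root for
    # vmnet access, so restart it with sudo if the socket is missing.
    sudo brew services restart socket_vmnet

    # Then recreate the profile, as the error text itself suggests.
    out/minikube-darwin-arm64 delete -p functional-484000
    out/minikube-darwin-arm64 start -p functional-484000 --driver=qemu2 --network=socket_vmnet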
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-484000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:16 PDT |                     |
|         | -p download-only-089000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | -o=json --download-only                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | -p download-only-276000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-276000                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-276000                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | --download-only -p                                                       | binary-mirror-200000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | binary-mirror-200000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:56977                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-200000                                                  | binary-mirror-200000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| addons  | disable dashboard -p                                                     | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | addons-644000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | addons-644000                                                            |                      |         |         |                     |                     |
| start   | -p addons-644000 --wait=true                                             | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-644000                                                         | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | -p nospam-957000 -n=1 --memory=2250 --wait=false                         | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-957000                                                         | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | minikube-local-cache-test:functional-484000                              |                      |         |         |                     |                     |
| cache   | functional-484000 cache delete                                           | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | minikube-local-cache-test:functional-484000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
| ssh     | functional-484000 ssh sudo                                               | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-484000                                                        | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-484000 ssh                                                    | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-484000 cache reload                                           | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
| ssh     | functional-484000 ssh                                                    | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-484000 kubectl --                                             | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | --context functional-484000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/30 11:18:08
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1030 11:18:08.633245   12335 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:08.633384   12335 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:08.633386   12335 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:08.633388   12335 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:08.633493   12335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:08.634801   12335 out.go:352] Setting JSON to false
I1030 11:18:08.652438   12335 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6459,"bootTime":1730305829,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1030 11:18:08.652504   12335 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1030 11:18:08.658162   12335 out.go:177] * [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1030 11:18:08.666080   12335 out.go:177]   - MINIKUBE_LOCATION=19883
I1030 11:18:08.666165   12335 notify.go:220] Checking for updates...
I1030 11:18:08.675061   12335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
I1030 11:18:08.678145   12335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1030 11:18:08.681116   12335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1030 11:18:08.684104   12335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
I1030 11:18:08.687101   12335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1030 11:18:08.690344   12335 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:08.690393   12335 driver.go:394] Setting default libvirt URI to qemu:///system
I1030 11:18:08.695080   12335 out.go:177] * Using the qemu2 driver based on existing profile
I1030 11:18:08.702073   12335 start.go:297] selected driver: qemu2
I1030 11:18:08.702078   12335 start.go:901] validating driver "qemu2" against &{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1030 11:18:08.702141   12335 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1030 11:18:08.704660   12335 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1030 11:18:08.704680   12335 cni.go:84] Creating CNI manager for ""
I1030 11:18:08.704707   12335 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1030 11:18:08.704746   12335 start.go:340] cluster config:
{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1030 11:18:08.709273   12335 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1030 11:18:08.716972   12335 out.go:177] * Starting "functional-484000" primary control-plane node in "functional-484000" cluster
I1030 11:18:08.721107   12335 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1030 11:18:08.721121   12335 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1030 11:18:08.721130   12335 cache.go:56] Caching tarball of preloaded images
I1030 11:18:08.721208   12335 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1030 11:18:08.721218   12335 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1030 11:18:08.721285   12335 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/functional-484000/config.json ...
I1030 11:18:08.721729   12335 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1030 11:18:08.721776   12335 start.go:364] duration metric: took 43.167µs to acquireMachinesLock for "functional-484000"
I1030 11:18:08.721783   12335 start.go:96] Skipping create...Using existing machine configuration
I1030 11:18:08.721786   12335 fix.go:54] fixHost starting: 
I1030 11:18:08.721904   12335 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
W1030 11:18:08.721911   12335 fix.go:138] unexpected machine state, will restart: <nil>
I1030 11:18:08.729138   12335 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
I1030 11:18:08.733025   12335 qemu.go:418] Using hvf for hardware acceleration
I1030 11:18:08.733058   12335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
I1030 11:18:08.735346   12335 main.go:141] libmachine: STDOUT: 
I1030 11:18:08.735362   12335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1030 11:18:08.735393   12335 fix.go:56] duration metric: took 13.604917ms for fixHost
I1030 11:18:08.735397   12335 start.go:83] releasing machines lock for "functional-484000", held for 13.618041ms
W1030 11:18:08.735402   12335 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1030 11:18:08.735447   12335 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1030 11:18:08.735452   12335 start.go:729] Will try again in 5 seconds ...
I1030 11:18:13.737681   12335 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1030 11:18:13.738156   12335 start.go:364] duration metric: took 388.25µs to acquireMachinesLock for "functional-484000"
I1030 11:18:13.738337   12335 start.go:96] Skipping create...Using existing machine configuration
I1030 11:18:13.738353   12335 fix.go:54] fixHost starting: 
I1030 11:18:13.739148   12335 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
W1030 11:18:13.739165   12335 fix.go:138] unexpected machine state, will restart: <nil>
I1030 11:18:13.747781   12335 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
I1030 11:18:13.751770   12335 qemu.go:418] Using hvf for hardware acceleration
I1030 11:18:13.752073   12335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
I1030 11:18:13.762446   12335 main.go:141] libmachine: STDOUT: 
I1030 11:18:13.762493   12335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1030 11:18:13.762579   12335 fix.go:56] duration metric: took 24.232417ms for fixHost
I1030 11:18:13.762592   12335 start.go:83] releasing machines lock for "functional-484000", held for 24.411834ms
W1030 11:18:13.762812   12335 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1030 11:18:13.769699   12335 out.go:201] 
W1030 11:18:13.773935   12335 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1030 11:18:13.773965   12335 out.go:270] * 
W1030 11:18:13.776576   12335 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1030 11:18:13.784836   12335 out.go:201] 

* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
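The assertion at functional_test.go:1228 looks for the word "Linux" in the captured logs, which would normally come from the guest's kernel output; with the VM stopped, the logs command exits 83 and emits only the host-side Audit and Last Start sections quoted above. A rough by-hand reproduction sketch, reusing the binary and profile names from this report:

    # Capture the logs the test inspected and note the exit status.
    out/minikube-darwin-arm64 -p functional-484000 logs > /tmp/logs.txt; echo "exit=$?"

    # Count occurrences of the expected word; 0 matches the failure above.
    grep -c "Linux" /tmp/logs.txt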

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd503736706/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:16 PDT |                     |
|         | -p download-only-089000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | -o=json --download-only                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | -p download-only-276000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-276000                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-089000                                                  | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| delete  | -p download-only-276000                                                  | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | --download-only -p                                                       | binary-mirror-200000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | binary-mirror-200000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:56977                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-200000                                                  | binary-mirror-200000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| addons  | disable dashboard -p                                                     | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | addons-644000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | addons-644000                                                            |                      |         |         |                     |                     |
| start   | -p addons-644000 --wait=true                                             | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-644000                                                         | addons-644000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | -p nospam-957000 -n=1 --memory=2250 --wait=false                         | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-957000 --log_dir                                                  | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-957000                                                         | nospam-957000        | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-484000 cache add                                              | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | minikube-local-cache-test:functional-484000                              |                      |         |         |                     |                     |
| cache   | functional-484000 cache delete                                           | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | minikube-local-cache-test:functional-484000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
| ssh     | functional-484000 ssh sudo                                               | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-484000                                                        | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-484000 ssh                                                    | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-484000 cache reload                                           | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
| ssh     | functional-484000 ssh                                                    | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT | 30 Oct 24 11:18 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-484000 kubectl --                                             | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | --context functional-484000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-484000                                                     | functional-484000    | jenkins | v1.34.0 | 30 Oct 24 11:18 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/30 11:18:08
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
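Every entry below follows the klog-style header documented above. A minimal Go sketch (an illustration only, not part of minikube) that splits one such line into the fields the format string names:

package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := "I1030 11:18:08.633245   12335 out.go:345] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}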
I1030 11:18:08.633245   12335 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:08.633384   12335 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:08.633386   12335 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:08.633388   12335 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:08.633493   12335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:08.634801   12335 out.go:352] Setting JSON to false
I1030 11:18:08.652438   12335 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6459,"bootTime":1730305829,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1030 11:18:08.652504   12335 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1030 11:18:08.658162   12335 out.go:177] * [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1030 11:18:08.666080   12335 out.go:177]   - MINIKUBE_LOCATION=19883
I1030 11:18:08.666165   12335 notify.go:220] Checking for updates...
I1030 11:18:08.675061   12335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
I1030 11:18:08.678145   12335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1030 11:18:08.681116   12335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1030 11:18:08.684104   12335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
I1030 11:18:08.687101   12335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1030 11:18:08.690344   12335 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:08.690393   12335 driver.go:394] Setting default libvirt URI to qemu:///system
I1030 11:18:08.695080   12335 out.go:177] * Using the qemu2 driver based on existing profile
I1030 11:18:08.702073   12335 start.go:297] selected driver: qemu2
I1030 11:18:08.702078   12335 start.go:901] validating driver "qemu2" against &{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1030 11:18:08.702141   12335 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1030 11:18:08.704660   12335 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1030 11:18:08.704680   12335 cni.go:84] Creating CNI manager for ""
I1030 11:18:08.704707   12335 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1030 11:18:08.704746   12335 start.go:340] cluster config:
{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1030 11:18:08.709273   12335 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1030 11:18:08.716972   12335 out.go:177] * Starting "functional-484000" primary control-plane node in "functional-484000" cluster
I1030 11:18:08.721107   12335 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1030 11:18:08.721121   12335 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1030 11:18:08.721130   12335 cache.go:56] Caching tarball of preloaded images
I1030 11:18:08.721208   12335 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1030 11:18:08.721218   12335 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1030 11:18:08.721285   12335 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/functional-484000/config.json ...
I1030 11:18:08.721729   12335 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1030 11:18:08.721776   12335 start.go:364] duration metric: took 43.167µs to acquireMachinesLock for "functional-484000"
I1030 11:18:08.721783   12335 start.go:96] Skipping create...Using existing machine configuration
I1030 11:18:08.721786   12335 fix.go:54] fixHost starting: 
I1030 11:18:08.721904   12335 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
W1030 11:18:08.721911   12335 fix.go:138] unexpected machine state, will restart: <nil>
I1030 11:18:08.729138   12335 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
I1030 11:18:08.733025   12335 qemu.go:418] Using hvf for hardware acceleration
I1030 11:18:08.733058   12335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
I1030 11:18:08.735346   12335 main.go:141] libmachine: STDOUT: 
I1030 11:18:08.735362   12335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1030 11:18:08.735393   12335 fix.go:56] duration metric: took 13.604917ms for fixHost
I1030 11:18:08.735397   12335 start.go:83] releasing machines lock for "functional-484000", held for 13.618041ms
W1030 11:18:08.735402   12335 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1030 11:18:08.735447   12335 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1030 11:18:08.735452   12335 start.go:729] Will try again in 5 seconds ...
I1030 11:18:13.737681   12335 start.go:360] acquireMachinesLock for functional-484000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1030 11:18:13.738156   12335 start.go:364] duration metric: took 388.25µs to acquireMachinesLock for "functional-484000"
I1030 11:18:13.738337   12335 start.go:96] Skipping create...Using existing machine configuration
I1030 11:18:13.738353   12335 fix.go:54] fixHost starting: 
I1030 11:18:13.739148   12335 fix.go:112] recreateIfNeeded on functional-484000: state=Stopped err=<nil>
W1030 11:18:13.739165   12335 fix.go:138] unexpected machine state, will restart: <nil>
I1030 11:18:13.747781   12335 out.go:177] * Restarting existing qemu2 VM for "functional-484000" ...
I1030 11:18:13.751770   12335 qemu.go:418] Using hvf for hardware acceleration
I1030 11:18:13.752073   12335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c4:8b:0e:9b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/functional-484000/disk.qcow2
I1030 11:18:13.762446   12335 main.go:141] libmachine: STDOUT: 
I1030 11:18:13.762493   12335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1030 11:18:13.762579   12335 fix.go:56] duration metric: took 24.232417ms for fixHost
I1030 11:18:13.762592   12335 start.go:83] releasing machines lock for "functional-484000", held for 24.411834ms
W1030 11:18:13.762812   12335 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-484000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1030 11:18:13.769699   12335 out.go:201] 
W1030 11:18:13.773935   12335 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1030 11:18:13.773965   12335 out.go:270] * 
W1030 11:18:13.776576   12335 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1030 11:18:13.784836   12335 out.go:201] 
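Both start attempts above fail at the same point: the qemu2 driver shells out through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial to /var/run/socket_vmnet is refused, so no VM ever boots. A minimal Go sketch (a hypothetical pre-flight check under that reading, not part of the test suite) that reproduces the failing dial directly:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same unix socket the minikube logs above fail to reach; connecting
	// may require the same privileges socket_vmnet_client runs with.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // "connection refused" on this agent
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}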

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-484000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-484000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.162083ms)

** stderr ** 
	error: context "functional-484000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-484000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-484000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-484000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-484000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-484000 --alsologtostderr -v=1] stderr:
I1030 11:18:49.971857   12530 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:49.972270   12530 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:49.972274   12530 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:49.972277   12530 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:49.972432   12530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:49.972722   12530 mustload.go:65] Loading cluster: functional-484000
I1030 11:18:49.972952   12530 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:49.976214   12530 out.go:177] * The control-plane node functional-484000 host is not running: state=Stopped
I1030 11:18:49.980150   12530 out.go:177]   To start a cluster, run: "minikube start -p functional-484000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (47.683583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 status: exit status 7 (79.237792ms)

-- stdout --
	functional-484000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-484000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (36.922833ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-484000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 status -o json: exit status 7 (35.197042ms)

-- stdout --
	{"Name":"functional-484000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-484000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (34.214833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.19s)
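Of the three variants exercised above, "minikube status -o json" prints one flat JSON object per node, which is the easiest form to branch on in CI. A short sketch (field names taken verbatim from the stdout above; the struct is otherwise an assumption, not minikube's published schema) that decodes it:

package main

import (
	"encoding/json"
	"fmt"
)

// Shape inferred from the stdout captured above.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-484000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var st minikubeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// In the run above, a stopped host also came with exit status 7.
	fmt.Println("host stopped:", st.Host == "Stopped")
}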

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-484000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-484000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.240625ms)

** stderr ** 
	error: context "functional-484000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-484000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-484000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-484000 describe po hello-node-connect: exit status 1 (26.814042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:1604: "kubectl --context functional-484000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-484000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-484000 logs -l app=hello-node-connect: exit status 1 (26.677375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:1610: "kubectl --context functional-484000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-484000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-484000 describe svc hello-node-connect: exit status 1 (27.1555ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:1616: "kubectl --context functional-484000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (35.257542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-484000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (39.307ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "echo hello": exit status 83 (46.7475ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n"*. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "cat /etc/hostname": exit status 83 (45.499625ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-484000"- but got *"* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n"*. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (42.282125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
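Exit status 83 here accompanies minikube printing guidance ("host is not running") instead of executing the ssh command. A standalone Go sketch (an illustration, not the actual test helper) that runs the same invocation and surfaces that exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test runs; the binary path is relative to the repo root.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-484000", "ssh", "echo hello")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 83 in the run logged above
	}
}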

TestFunctional/parallel/CpCmd (0.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (59.774958ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-484000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 "sudo cat /home/docker/cp-test.txt": exit status 83 (47.983292ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-484000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-484000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cp functional-484000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2785788208/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 cp functional-484000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2785788208/001/cp-test.txt: exit status 83 (47.4485ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-484000 cp functional-484000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2785788208/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.851084ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2785788208/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (55.929125ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-484000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (49.762458ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-484000 ssh -n functional-484000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-484000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-484000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.31s)
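In the "-want +got" diffs above, "-" lines come from the expected string and "+" lines from what the CLI actually printed. The format matches the go-cmp library; a minimal sketch (assuming github.com/google/go-cmp, an inference from the output shape rather than something stated in this report) reproducing such a diff:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-484000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-484000\"\n"
	// cmp.Diff returns "" when the values are equal, otherwise a -want/+got diff.
	fmt.Println(cmp.Diff(want, got))
}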

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12043/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/test/nested/copy/12043/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/test/nested/copy/12043/hosts": exit status 83 (42.806417ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/test/nested/copy/12043/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-484000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-484000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (35.020292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12043.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/12043.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/12043.pem": exit status 83 (53.48625ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/12043.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo cat /etc/ssl/certs/12043.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/12043.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-484000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-484000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12043.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /usr/share/ca-certificates/12043.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /usr/share/ca-certificates/12043.pem": exit status 83 (49.688708ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/12043.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo cat /usr/share/ca-certificates/12043.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/12043.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-484000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-484000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (45.814375ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-484000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-484000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/120432.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/120432.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/120432.pem": exit status 83 (45.699291ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/120432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo cat /etc/ssl/certs/120432.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/120432.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-484000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-484000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/120432.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /usr/share/ca-certificates/120432.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /usr/share/ca-certificates/120432.pem": exit status 83 (46.672834ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/120432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo cat /usr/share/ca-certificates/120432.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/120432.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-484000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-484000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (44.537875ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-484000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-484000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (33.853084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.32s)
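Every CertSync probe above follows one pattern: run `minikube ssh "sudo cat <path>"` and diff the output against the local PEM. A hedged sketch of that loop; the profile name and guest paths are taken from the log, while the local PEM filename is an assumption (/etc/ssl/certs/51391683.0 is evidently the OpenSSL subject-hash name for the same certificate):

package main

import (
	"fmt"
	"os"
	"os/exec"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Assumed local copy of the certificate the test installs into the guest.
	want, err := os.ReadFile("testdata/minikube_test.pem")
	if err != nil {
		panic(err)
	}
	// Guest paths from the log: two plain copies plus the subject-hash name.
	for _, path := range []string{
		"/etc/ssl/certs/12043.pem",
		"/usr/share/ca-certificates/12043.pem",
		"/etc/ssl/certs/51391683.0",
	} {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-484000",
			"ssh", "sudo cat "+path).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: ssh failed: %v\n", path, err)
		}
		if diff := cmp.Diff(string(want), string(out)); diff != "" {
			fmt.Printf("%s mismatch (-want +got):\n%s", path, diff)
		}
	}
}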

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-484000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-484000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.865125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-484000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-484000 -n functional-484000: exit status 7 (34.188292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-484000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
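The --template argument above is a Go text/template: it iterates the labels of the first node and prints each key. A standalone sketch of the same template evaluated against a stand-in for the NodeList JSON (the label values are made up; only the keys matter here):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for the decoded output of `kubectl get nodes -o json`.
	nodes := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]string{
						"minikube.k8s.io/name":    "functional-484000",
						"minikube.k8s.io/primary": "true",
					},
				},
			},
		},
	}
	// The exact template from the failing command: print every label key
	// of the first item, space-separated.
	t := template.Must(template.New("labels").Parse(
		"{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}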

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo systemctl is-active crio": exit status 83 (46.349416ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1030 11:18:14.485736   12386 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:14.485954   12386 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:14.485957   12386 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:14.485959   12386 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:14.486093   12386 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:14.486400   12386 mustload.go:65] Loading cluster: functional-484000
I1030 11:18:14.486636   12386 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:14.489884   12386 out.go:177] * The control-plane node functional-484000 host is not running: state=Stopped
I1030 11:18:14.498996   12386 out.go:177]   To start a cluster, run: "minikube start -p functional-484000"

stdout: * The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 12385: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-484000": client config: context "functional-484000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (97.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-484000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-484000 get svc nginx-svc: exit status 1 (70.981291ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-484000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-484000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (97.15s)
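The "no Host in request URL" failure above is net/http refusing a URL with an empty host: the tunnel never produced a service IP, so the test effectively requested "http://" plus an empty string. A minimal reproduction (exact error quoting can vary across Go versions):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// With no tunnel IP, the requested URL has a scheme but no host.
	_, err := http.Get("http://")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}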

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-484000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-484000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.998458ms)

** stderr ** 
	error: context "functional-484000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-484000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 service list: exit status 83 (48.702459ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-484000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 service list -o json: exit status 83 (44.836541ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-484000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 service --namespace=default --https --url hello-node: exit status 83 (45.847334ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-484000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 service hello-node --url --format={{.IP}}: exit status 83 (47.885083ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-484000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 service hello-node --url: exit status 83 (46.745208ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-484000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:1569: failed to parse "* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"": parse "* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
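The parse error in functional_test.go:1569 comes from net/url, which rejects the control character (the embedded newline) in what should have been a service URL. A minimal reproduction with the string from the log:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The two-line minikube message, newline included, is not a URL.
	out := "* The control-plane node functional-484000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-484000\""
	if _, err := url.Parse(out); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
}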

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 version -o=json --components: exit status 83 (46.930167ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-484000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-484000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-484000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-484000 image ls --format short --alsologtostderr:
I1030 11:18:54.817872   12652 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:54.818044   12652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:54.818048   12652 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:54.818051   12652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:54.818189   12652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:54.818615   12652 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:54.818675   12652 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-484000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-484000 image ls --format table --alsologtostderr:
I1030 11:18:55.058686   12670 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:55.058860   12670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:55.058864   12670 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:55.058866   12670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:55.058994   12670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:55.059375   12670 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:55.059438   12670 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-484000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-484000 image ls --format json --alsologtostderr:
I1030 11:18:55.019366   12668 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:55.019534   12668 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:55.019538   12668 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:55.019541   12668 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:55.019666   12668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:55.020062   12668 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:55.020126   12668 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-484000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-484000 image ls --format yaml --alsologtostderr:
I1030 11:18:54.859507   12657 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:54.859696   12657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:54.859700   12657 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:54.859702   12657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:54.859832   12657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:54.860238   12657 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:54.860303   12657 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh pgrep buildkitd: exit status 83 (43.281542ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image build -t localhost/my-image:functional-484000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-484000 image build -t localhost/my-image:functional-484000 testdata/build --alsologtostderr:
I1030 11:18:54.942658   12661 out.go:345] Setting OutFile to fd 1 ...
I1030 11:18:54.943653   12661 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:54.943657   12661 out.go:358] Setting ErrFile to fd 2...
I1030 11:18:54.943659   12661 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:18:54.943791   12661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:18:54.944178   12661 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:54.944605   12661 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:18:54.944829   12661 build_images.go:133] succeeded building to: 
I1030 11:18:54.944833   12661 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls
functional_test.go:446: expected "localhost/my-image:functional-484000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image load --daemon kicbase/echo-server:functional-484000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-484000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image load --daemon kicbase/echo-server:functional-484000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-484000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-484000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image load --daemon kicbase/echo-server:functional-484000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-484000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image save kicbase/echo-server:functional-484000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-484000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-484000 docker-env) && out/minikube-darwin-arm64 status -p functional-484000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-484000 docker-env) && out/minikube-darwin-arm64 status -p functional-484000": exit status 1 (51.908375ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
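The docker-env failure is the same stopped-host problem one layer up: the eval exports nothing usable, so the follow-up status call exits 1 almost immediately. Against a running node the sequence reduces to (illustrative; output details depend on the local Docker CLI):

    # print the variables docker-env would export, without eval-ing them
    out/minikube-darwin-arm64 -p functional-484000 docker-env

    # point the host docker CLI at the VM's daemon and confirm the switch took
    eval "$(out/minikube-darwin-arm64 -p functional-484000 docker-env)"
    docker info --format '{{.Name}}'    # should name the minikube VM, not the host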
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2: exit status 83 (46.909333ms)
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"
-- /stdout --
** stderr ** 
	I1030 11:18:55.098040   12672 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:18:55.098847   12672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:55.098851   12672 out.go:358] Setting ErrFile to fd 2...
	I1030 11:18:55.098853   12672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:55.098981   12672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:18:55.099178   12672 mustload.go:65] Loading cluster: functional-484000
	I1030 11:18:55.099380   12672 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:18:55.103911   12672 out.go:177] * The control-plane node functional-484000 host is not running: state=Stopped
	I1030 11:18:55.107913   12672 out.go:177]   To start a cluster, run: "minikube start -p functional-484000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2: exit status 83 (46.474958ms)
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"
-- /stdout --
** stderr ** 
	I1030 11:18:55.192307   12676 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:18:55.192473   12676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:55.192476   12676 out.go:358] Setting ErrFile to fd 2...
	I1030 11:18:55.192478   12676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:55.192614   12676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:18:55.192817   12676 mustload.go:65] Loading cluster: functional-484000
	I1030 11:18:55.193015   12676 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:18:55.197848   12676 out.go:177] * The control-plane node functional-484000 host is not running: state=Stopped
	I1030 11:18:55.201877   12676 out.go:177]   To start a cluster, run: "minikube start -p functional-484000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n", want=*"context has been updated"*
I1030 11:18:55.880045   12043 retry.go:31] will retry after 23.107138636s: Temporary Error: Get "http:": http: no Host in request URL
I1030 11:19:18.989857   12043 retry.go:31] will retry after 32.635209653s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2: exit status 83 (46.653667ms)
-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"
-- /stdout --
** stderr ** 
	I1030 11:18:55.145142   12674 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:18:55.145321   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:55.145324   12674 out.go:358] Setting ErrFile to fd 2...
	I1030 11:18:55.145326   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:55.145437   12674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:18:55.145661   12674 mustload.go:65] Loading cluster: functional-484000
	I1030 11:18:55.145873   12674 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:18:55.150919   12674 out.go:177] * The control-plane node functional-484000 host is not running: state=Stopped
	I1030 11:18:55.154769   12674 out.go:177]   To start a cluster, run: "minikube start -p functional-484000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-484000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-484000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
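All three UpdateContextCmd subtests fail identically with exit status 83: update-context refuses to rewrite kubeconfig while the control-plane host is stopped, so neither the expected "No changes" nor "context has been updated" message can ever be printed. A quick way to see what kubeconfig currently records for the profile (illustrative):

    # capture the exit code explicitly; 83 accompanies the stopped-host message in this run
    out/minikube-darwin-arm64 -p functional-484000 update-context --alsologtostderr -v=2; echo "exit=$?"

    # inspect the API server URL kubectl has on file for this cluster, if any
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-484000")].cluster.server}'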
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1030 11:19:51.714844   12043 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036492291s)
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
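The dig timeout is expected whenever "minikube tunnel" is not actually routing the service CIDR: 10.96.0.10 is the in-cluster DNS service address and is unreachable from the host otherwise (the scutil dump above shows the cluster.local resolver is registered, but nothing answers behind it). The intended happy path, roughly (requires a running cluster and the tunnel left running in a separate shell):

    # shell 1: route the cluster's service network onto the host
    out/minikube-darwin-arm64 -p functional-484000 tunnel

    # shell 2: the in-cluster resolver should now answer with an ANSWER section
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A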
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1030 11:20:16.857278   12043 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:20:26.859508   12043 retry.go:31] will retry after 3.29054768s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1030 11:20:40.154428   12043 retry.go:31] will retry after 3.154470768s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:50770->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
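AccessThroughDNS is the HTTP flavor of the same probe: the lookup forwarded to 10.96.0.10 times out, so the name falls through to 8.8.8.8, which cannot know it. With a working tunnel the check reduces to a single fetch (illustrative; host resolver behavior for the trailing-dot cluster domain may vary):

    # resolve via the cluster DNS entry and expect the nginx welcome page
    curl -sS --max-time 10 http://nginx-svc.default.svc.cluster.local./ | grep -i 'Welcome to nginx'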
TestMultiControlPlane/serial/StartCluster (9.93s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-065000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-065000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.856904583s)
-- stdout --
	* [ha-065000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-065000" primary control-plane node in "ha-065000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-065000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1030 11:20:47.267655   12705 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:20:47.267850   12705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:20:47.267853   12705 out.go:358] Setting ErrFile to fd 2...
	I1030 11:20:47.267856   12705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:20:47.268003   12705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:20:47.269427   12705 out.go:352] Setting JSON to false
	I1030 11:20:47.287564   12705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6618,"bootTime":1730305829,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:20:47.287635   12705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:20:47.294074   12705 out.go:177] * [ha-065000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:20:47.302081   12705 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:20:47.302137   12705 notify.go:220] Checking for updates...
	I1030 11:20:47.309957   12705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:20:47.313004   12705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:20:47.316061   12705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:20:47.317413   12705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:20:47.320980   12705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:20:47.324216   12705 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:20:47.325826   12705 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:20:47.333074   12705 start.go:297] selected driver: qemu2
	I1030 11:20:47.333082   12705 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:20:47.333090   12705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:20:47.335687   12705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:20:47.338984   12705 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:20:47.343069   12705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:20:47.343090   12705 cni.go:84] Creating CNI manager for ""
	I1030 11:20:47.343112   12705 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1030 11:20:47.343122   12705 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 11:20:47.343158   12705 start.go:340] cluster config:
	{Name:ha-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:20:47.347989   12705 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:20:47.355977   12705 out.go:177] * Starting "ha-065000" primary control-plane node in "ha-065000" cluster
	I1030 11:20:47.360017   12705 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:20:47.360033   12705 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:20:47.360043   12705 cache.go:56] Caching tarball of preloaded images
	I1030 11:20:47.360126   12705 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:20:47.360132   12705 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:20:47.360379   12705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/ha-065000/config.json ...
	I1030 11:20:47.360392   12705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/ha-065000/config.json: {Name:mkcb4c7a977f48a775b6b49154aa738c6c333f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:20:47.360735   12705 start.go:360] acquireMachinesLock for ha-065000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:20:47.360786   12705 start.go:364] duration metric: took 44.75µs to acquireMachinesLock for "ha-065000"
	I1030 11:20:47.360800   12705 start.go:93] Provisioning new machine with config: &{Name:ha-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:20:47.360831   12705 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:20:47.365049   12705 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:20:47.382544   12705 start.go:159] libmachine.API.Create for "ha-065000" (driver="qemu2")
	I1030 11:20:47.382575   12705 client.go:168] LocalClient.Create starting
	I1030 11:20:47.382641   12705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:20:47.382680   12705 main.go:141] libmachine: Decoding PEM data...
	I1030 11:20:47.382696   12705 main.go:141] libmachine: Parsing certificate...
	I1030 11:20:47.382736   12705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:20:47.382766   12705 main.go:141] libmachine: Decoding PEM data...
	I1030 11:20:47.382775   12705 main.go:141] libmachine: Parsing certificate...
	I1030 11:20:47.383302   12705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:20:47.540609   12705 main.go:141] libmachine: Creating SSH key...
	I1030 11:20:47.597831   12705 main.go:141] libmachine: Creating Disk image...
	I1030 11:20:47.597837   12705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:20:47.598010   12705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:20:47.607966   12705 main.go:141] libmachine: STDOUT: 
	I1030 11:20:47.607987   12705 main.go:141] libmachine: STDERR: 
	I1030 11:20:47.608047   12705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2 +20000M
	I1030 11:20:47.616762   12705 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:20:47.616786   12705 main.go:141] libmachine: STDERR: 
	I1030 11:20:47.616806   12705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:20:47.616811   12705 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:20:47.616822   12705 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:20:47.616849   12705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:a1:27:ba:08:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:20:47.618709   12705 main.go:141] libmachine: STDOUT: 
	I1030 11:20:47.618729   12705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:20:47.618750   12705 client.go:171] duration metric: took 236.170583ms to LocalClient.Create
	I1030 11:20:49.620978   12705 start.go:128] duration metric: took 2.260154333s to createHost
	I1030 11:20:49.621073   12705 start.go:83] releasing machines lock for "ha-065000", held for 2.260269458s
	W1030 11:20:49.621142   12705 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:20:49.636586   12705 out.go:177] * Deleting "ha-065000" in qemu2 ...
	W1030 11:20:49.668859   12705 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:20:49.668891   12705 start.go:729] Will try again in 5 seconds ...
	I1030 11:20:54.670986   12705 start.go:360] acquireMachinesLock for ha-065000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:20:54.671555   12705 start.go:364] duration metric: took 490.375µs to acquireMachinesLock for "ha-065000"
	I1030 11:20:54.671696   12705 start.go:93] Provisioning new machine with config: &{Name:ha-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:20:54.672023   12705 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:20:54.678692   12705 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:20:54.726416   12705 start.go:159] libmachine.API.Create for "ha-065000" (driver="qemu2")
	I1030 11:20:54.726460   12705 client.go:168] LocalClient.Create starting
	I1030 11:20:54.726643   12705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:20:54.726737   12705 main.go:141] libmachine: Decoding PEM data...
	I1030 11:20:54.726753   12705 main.go:141] libmachine: Parsing certificate...
	I1030 11:20:54.726828   12705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:20:54.726888   12705 main.go:141] libmachine: Decoding PEM data...
	I1030 11:20:54.726902   12705 main.go:141] libmachine: Parsing certificate...
	I1030 11:20:54.728179   12705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:20:54.902539   12705 main.go:141] libmachine: Creating SSH key...
	I1030 11:20:55.016977   12705 main.go:141] libmachine: Creating Disk image...
	I1030 11:20:55.016983   12705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:20:55.017171   12705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:20:55.027401   12705 main.go:141] libmachine: STDOUT: 
	I1030 11:20:55.027419   12705 main.go:141] libmachine: STDERR: 
	I1030 11:20:55.027491   12705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2 +20000M
	I1030 11:20:55.036011   12705 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:20:55.036027   12705 main.go:141] libmachine: STDERR: 
	I1030 11:20:55.036036   12705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:20:55.036042   12705 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:20:55.036049   12705 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:20:55.036075   12705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:53:db:2a:e8:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:20:55.037931   12705 main.go:141] libmachine: STDOUT: 
	I1030 11:20:55.037945   12705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:20:55.037957   12705 client.go:171] duration metric: took 311.495042ms to LocalClient.Create
	I1030 11:20:57.040105   12705 start.go:128] duration metric: took 2.368084042s to createHost
	I1030 11:20:57.040206   12705 start.go:83] releasing machines lock for "ha-065000", held for 2.368617083s
	W1030 11:20:57.040594   12705 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-065000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-065000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:20:57.055221   12705 out.go:201] 
	W1030 11:20:57.060447   12705 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:20:57.060492   12705 out.go:270] * 
	* 
	W1030 11:20:57.063151   12705 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:20:57.076344   12705 out.go:201] 
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-065000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (71.906417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.93s)
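Every qemu2 VM creation in the TestMultiControlPlane group dies at the same point: the driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client and gets "Connection refused" on /var/run/socket_vmnet, which means no socket_vmnet daemon is serving that socket on the build host. A host-side triage sketch (paths taken from the log above; the service command assumes a Homebrew install):

    # does the unix socket exist, and is any daemon serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"

    # with a Homebrew install, socket_vmnet typically runs as a root service
    sudo brew services info socket_vmnet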
TestMultiControlPlane/serial/DeployApp (113.62s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.154958ms)
** stderr ** 
	error: cluster "ha-065000" does not exist
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- rollout status deployment/busybox: exit status 1 (63.057333ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.815ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:20:57.355590   12043 retry.go:31] will retry after 538.754779ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.061625ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:20:58.004746   12043 retry.go:31] will retry after 1.39878255s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.406291ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:20:59.516227   12043 retry.go:31] will retry after 3.364333563s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.751459ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:21:02.988901   12043 retry.go:31] will retry after 3.358082917s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.974875ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:21:06.460224   12043 retry.go:31] will retry after 3.961596442s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.973917ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:21:10.534134   12043 retry.go:31] will retry after 7.263312065s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.1585ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:21:17.911936   12043 retry.go:31] will retry after 8.937423552s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.626875ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:21:26.964328   12043 retry.go:31] will retry after 17.15302065s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.5395ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:21:44.229002   12043 retry.go:31] will retry after 12.828074555s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.995833ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:21:57.168424   12043 retry.go:31] will retry after 53.219625102s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.952042ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.494292ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.947708ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.257875ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.794416ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (35.092833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (113.62s)
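The two-minute retry loop above can never converge: StartCluster never created ha-065000, so every kubectl call fails instantly while retry.go backs off. The two error strings ("cluster \"ha-065000\" does not exist" vs. "no server found for cluster") differ only in where the kubeconfig lookup dies, and both can be confirmed from kubeconfig alone (illustrative):

    # list what kubectl actually has registered; ha-065000 will be absent
    kubectl config get-contexts
    kubectl config view -o jsonpath='{.clusters[*].name}'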
TestMultiControlPlane/serial/PingHostFromPods (0.1s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-065000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.309917ms)
** stderr ** 
	error: no server found for cluster "ha-065000"
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (35.436458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-065000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-065000 -v=7 --alsologtostderr: exit status 83 (48.719541ms)
-- stdout --
	* The control-plane node ha-065000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-065000"
-- /stdout --
** stderr ** 
	I1030 11:22:50.919764   12792 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:50.920155   12792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:50.920159   12792 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:50.920161   12792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:50.920292   12792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:50.920491   12792 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:50.920724   12792 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:50.924726   12792 out.go:177] * The control-plane node ha-065000 host is not running: state=Stopped
	I1030 11:22:50.928701   12792 out.go:177]   To start a cluster, run: "minikube start -p ha-065000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-065000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (35.319042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)
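
Note that `node add` signalled the stopped host through a distinct exit code (83) rather than a generic 1. A short sketch of how a caller can recover that code in Go (the command line mirrors the log; the handling is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-darwin-arm64",
			"node", "add", "-p", "ha-065000").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// In the run logged above this prints 83, the code that
			// accompanied the "host is not running: state=Stopped" advice.
			fmt.Println("minikube exit code:", ee.ExitCode())
		}
	}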

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-065000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-065000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.532834ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-065000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-065000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-065000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.90525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
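
Two errors are reported here: kubectl fails because the kubeconfig context is gone, and the test's JSON decode of the resulting empty output then fails with "unexpected end of JSON input". The second message is simply what encoding/json returns for empty input, as this minimal sketch shows:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl produced no stdout, so the test effectively decodes "".
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}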

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-065000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-065000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (35.312666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
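
The assertion decodes `profile list --output json` and checks both the node count under Config.Nodes (expecting 4) and the profile Status (expecting "HAppy"). A trimmed sketch of that decode, using a stand-in struct with only the fields the check needs (not minikube's full config type) and an abbreviated form of the payload quoted above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Status string
			Config struct {
				Nodes []json.RawMessage
			}
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-065000","Status":"Starting","Config":{"Nodes":[{}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		p := pl.Valid[0]
		// Prints: ha-065000 status=Starting nodes=1 (the test wanted 4 and "HAppy").
		fmt.Printf("%s status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}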

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status --output json -v=7 --alsologtostderr: exit status 7 (34.518ms)

-- stdout --
	{"Name":"ha-065000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1030 11:22:51.153921   12804 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:51.154114   12804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.154117   12804 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:51.154120   12804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.154241   12804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:51.154369   12804 out.go:352] Setting JSON to true
	I1030 11:22:51.154379   12804 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:51.154433   12804 notify.go:220] Checking for updates...
	I1030 11:22:51.154597   12804 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:51.154607   12804 status.go:174] checking status of ha-065000 ...
	I1030 11:22:51.154854   12804 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:22:51.154858   12804 status.go:384] host is not running, skipping remaining checks
	I1030 11:22:51.154860   12804 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-065000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.361833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
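
The decode error above is a shape mismatch: with a single node, `status --output json` printed one object, while the test unmarshals into a slice ([]cluster.Status). A tolerant reader could accept both shapes; the struct below is a stand-in for cluster.Status, and the sketch is illustrative rather than the test's fix:

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	type status struct { // stand-in for cluster.Status
		Name, Host string
	}

	// decodeStatuses accepts either a JSON array or a single JSON object.
	func decodeStatuses(b []byte) ([]status, error) {
		b = bytes.TrimSpace(b)
		if len(b) > 0 && b[0] == '[' {
			var many []status
			return many, json.Unmarshal(b, &many)
		}
		var one status
		if err := json.Unmarshal(b, &one); err != nil {
			return nil, err
		}
		return []status{one}, nil
	}

	func main() {
		out, _ := decodeStatuses([]byte(`{"Name":"ha-065000","Host":"Stopped"}`))
		fmt.Println(out) // [{ha-065000 Stopped}]
	}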

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 node stop m02 -v=7 --alsologtostderr: exit status 85 (50.83175ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1030 11:22:51.224167   12808 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:51.224614   12808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.224618   12808 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:51.224621   12808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.224774   12808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:51.224999   12808 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:51.225209   12808 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:51.228997   12808 out.go:201] 
	W1030 11:22:51.232620   12808 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1030 11:22:51.232625   12808 out.go:270] * 
	* 
	W1030 11:22:51.234556   12808 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:22:51.237742   12808 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-065000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (35.0375ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:22:51.275107   12810 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:51.275318   12810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.275321   12810 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:51.275323   12810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.275428   12810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:51.275543   12810 out.go:352] Setting JSON to false
	I1030 11:22:51.275554   12810 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:51.275613   12810 notify.go:220] Checking for updates...
	I1030 11:22:51.275759   12810 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:51.275769   12810 status.go:174] checking status of ha-065000 ...
	I1030 11:22:51.276024   12810 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:22:51.276027   12810 status.go:384] host is not running, skipping remaining checks
	I1030 11:22:51.276029   12810 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr": ha-065000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr": ha-065000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr": ha-065000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr": ha-065000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (35.289792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
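
The checks at ha_test.go:377-386 scan the plain-text status output for per-node lines; with only one stopped node present, every count falls short. A rough sketch of that kind of scan (the expected counts belong to the test; the helper below is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		out := "ha-065000\n" +
			"type: Control Plane\n" +
			"host: Stopped\n" +
			"kubelet: Stopped\n" +
			"apiserver: Stopped\n" +
			"kubeconfig: Stopped\n"
		fmt.Println("control planes:", strings.Count(out, "type: Control Plane")) // 1, not 3
		fmt.Println("running hosts:", strings.Count(out, "host: Running"))        // 0
	}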

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-065000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.9505ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (59.02s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 node start m02 -v=7 --alsologtostderr: exit status 85 (52.505625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1030 11:22:51.434222   12819 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:51.434636   12819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.434641   12819 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:51.434643   12819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.434818   12819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:51.435033   12819 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:51.435250   12819 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:51.438773   12819 out.go:201] 
	W1030 11:22:51.442735   12819 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1030 11:22:51.442740   12819 out.go:270] * 
	* 
	W1030 11:22:51.444620   12819 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:22:51.448586   12819 out.go:201] 

** /stderr **
ha_test.go:424: I1030 11:22:51.434222   12819 out.go:345] Setting OutFile to fd 1 ...
I1030 11:22:51.434636   12819 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:22:51.434641   12819 out.go:358] Setting ErrFile to fd 2...
I1030 11:22:51.434643   12819 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:22:51.434818   12819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:22:51.435033   12819 mustload.go:65] Loading cluster: ha-065000
I1030 11:22:51.435250   12819 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:22:51.438773   12819 out.go:201] 
W1030 11:22:51.442735   12819 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1030 11:22:51.442740   12819 out.go:270] * 
* 
W1030 11:22:51.444620   12819 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1030 11:22:51.448586   12819 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-065000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (35.277958ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:22:51.486771   12821 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:51.486998   12821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.487004   12821 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:51.487006   12821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:51.487137   12821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:51.487258   12821 out.go:352] Setting JSON to false
	I1030 11:22:51.487269   12821 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:51.487317   12821 notify.go:220] Checking for updates...
	I1030 11:22:51.487479   12821 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:51.487488   12821 status.go:174] checking status of ha-065000 ...
	I1030 11:22:51.488086   12821 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:22:51.488097   12821 status.go:384] host is not running, skipping remaining checks
	I1030 11:22:51.488100   12821 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:22:51.489065   12043 retry.go:31] will retry after 881.070376ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (82.238583ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:22:52.451334   12823 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:52.451601   12823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:52.451605   12823 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:52.451608   12823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:52.451781   12823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:52.451945   12823 out.go:352] Setting JSON to false
	I1030 11:22:52.451959   12823 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:52.452319   12823 notify.go:220] Checking for updates...
	I1030 11:22:52.453081   12823 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:52.453109   12823 status.go:174] checking status of ha-065000 ...
	I1030 11:22:52.453571   12823 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:22:52.453578   12823 status.go:384] host is not running, skipping remaining checks
	I1030 11:22:52.453581   12823 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:22:52.454776   12043 retry.go:31] will retry after 2.202461715s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (80.287041ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:22:54.737874   12825 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:54.738091   12825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:54.738095   12825 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:54.738099   12825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:54.738248   12825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:54.738392   12825 out.go:352] Setting JSON to false
	I1030 11:22:54.738405   12825 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:54.738438   12825 notify.go:220] Checking for updates...
	I1030 11:22:54.738637   12825 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:54.738647   12825 status.go:174] checking status of ha-065000 ...
	I1030 11:22:54.738940   12825 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:22:54.738945   12825 status.go:384] host is not running, skipping remaining checks
	I1030 11:22:54.738948   12825 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:22:54.739938   12043 retry.go:31] will retry after 2.957403301s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (80.79025ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:22:57.778228   12827 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:22:57.778459   12827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:57.778464   12827 out.go:358] Setting ErrFile to fd 2...
	I1030 11:22:57.778468   12827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:22:57.778641   12827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:22:57.778828   12827 out.go:352] Setting JSON to false
	I1030 11:22:57.778842   12827 mustload.go:65] Loading cluster: ha-065000
	I1030 11:22:57.778885   12827 notify.go:220] Checking for updates...
	I1030 11:22:57.779100   12827 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:22:57.779111   12827 status.go:174] checking status of ha-065000 ...
	I1030 11:22:57.779437   12827 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:22:57.779442   12827 status.go:384] host is not running, skipping remaining checks
	I1030 11:22:57.779445   12827 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:22:57.780584   12043 retry.go:31] will retry after 3.276374329s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (81.414292ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:23:01.138681   12829 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:01.138879   12829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:01.138884   12829 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:01.138887   12829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:01.139049   12829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:01.139196   12829 out.go:352] Setting JSON to false
	I1030 11:23:01.139210   12829 mustload.go:65] Loading cluster: ha-065000
	I1030 11:23:01.139265   12829 notify.go:220] Checking for updates...
	I1030 11:23:01.139481   12829 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:01.139491   12829 status.go:174] checking status of ha-065000 ...
	I1030 11:23:01.139783   12829 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:23:01.139787   12829 status.go:384] host is not running, skipping remaining checks
	I1030 11:23:01.139790   12829 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:23:01.140828   12043 retry.go:31] will retry after 4.311282789s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (80.200416ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:23:05.532557   12831 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:05.532772   12831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:05.532776   12831 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:05.532779   12831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:05.532927   12831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:05.533080   12831 out.go:352] Setting JSON to false
	I1030 11:23:05.533093   12831 mustload.go:65] Loading cluster: ha-065000
	I1030 11:23:05.533122   12831 notify.go:220] Checking for updates...
	I1030 11:23:05.533373   12831 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:05.533383   12831 status.go:174] checking status of ha-065000 ...
	I1030 11:23:05.533671   12831 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:23:05.533676   12831 status.go:384] host is not running, skipping remaining checks
	I1030 11:23:05.533678   12831 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:23:05.534689   12043 retry.go:31] will retry after 7.426950038s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (80.943125ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:23:13.042825   12835 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:13.043071   12835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:13.043075   12835 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:13.043078   12835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:13.043255   12835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:13.043406   12835 out.go:352] Setting JSON to false
	I1030 11:23:13.043420   12835 mustload.go:65] Loading cluster: ha-065000
	I1030 11:23:13.043455   12835 notify.go:220] Checking for updates...
	I1030 11:23:13.043724   12835 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:13.043735   12835 status.go:174] checking status of ha-065000 ...
	I1030 11:23:13.044039   12835 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:23:13.044043   12835 status.go:384] host is not running, skipping remaining checks
	I1030 11:23:13.044046   12835 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:23:13.045057   12043 retry.go:31] will retry after 14.010880057s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (76.137083ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:23:27.135673   12837 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:27.135879   12837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:27.135883   12837 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:27.135886   12837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:27.136025   12837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:27.136169   12837 out.go:352] Setting JSON to false
	I1030 11:23:27.136182   12837 mustload.go:65] Loading cluster: ha-065000
	I1030 11:23:27.136219   12837 notify.go:220] Checking for updates...
	I1030 11:23:27.136423   12837 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:27.136433   12837 status.go:174] checking status of ha-065000 ...
	I1030 11:23:27.136722   12837 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:23:27.136727   12837 status.go:384] host is not running, skipping remaining checks
	I1030 11:23:27.136729   12837 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1030 11:23:27.138389   12043 retry.go:31] will retry after 23.162655602s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (81.270958ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:23:50.382355   12839 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:50.382573   12839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:50.382577   12839 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:50.382580   12839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:50.382761   12839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:50.382916   12839 out.go:352] Setting JSON to false
	I1030 11:23:50.382929   12839 mustload.go:65] Loading cluster: ha-065000
	I1030 11:23:50.382976   12839 notify.go:220] Checking for updates...
	I1030 11:23:50.383171   12839 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:50.383180   12839 status.go:174] checking status of ha-065000 ...
	I1030 11:23:50.383468   12839 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:23:50.383472   12839 status.go:384] host is not running, skipping remaining checks
	I1030 11:23:50.383475   12839 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (36.61ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.02s)
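
The repeated "will retry after ..." lines above come from the integration suite's retry helper (retry.go:31), which re-polls the status command with growing, jittered delays until its budget runs out. A minimal sketch of that pattern under assumed parameters (the real helper's policy may differ):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryBackoff re-runs f with growing, jittered delays, mirroring the
	// cadence visible above (0.9s, 2.2s, 3.0s, ... 23s).
	func retryBackoff(attempts int, base time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retryBackoff(5, 500*time.Millisecond, func() error {
			return fmt.Errorf("exit status 7") // stands in for the status call
		})
	}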

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-065000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-065000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.7575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
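Both assertions above parse the same 'profile list' JSON, checking the length of Config.Nodes and the computed Status field. When triaging locally, a jq filter (assuming jq is installed on the agent; it is not part of the test harness) extracts just those fields:

    out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'

Against the output captured above this reports one node with status "Starting", matching both failure messages.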

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.13s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-065000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-065000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-065000 -v=7 --alsologtostderr: (1.742496125s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-065000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-065000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.231816042s)

-- stdout --
	* [ha-065000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-065000" primary control-plane node in "ha-065000" cluster
	* Restarting existing qemu2 VM for "ha-065000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-065000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:23:52.356991   12860 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:52.357169   12860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:52.357174   12860 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:52.357177   12860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:52.357341   12860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:52.358515   12860 out.go:352] Setting JSON to false
	I1030 11:23:52.378197   12860 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6803,"bootTime":1730305829,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:23:52.378273   12860 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:23:52.382827   12860 out.go:177] * [ha-065000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:23:52.390545   12860 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:23:52.390613   12860 notify.go:220] Checking for updates...
	I1030 11:23:52.397456   12860 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:23:52.400491   12860 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:23:52.403565   12860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:23:52.406512   12860 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:23:52.409457   12860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:23:52.412729   12860 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:52.412778   12860 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:23:52.417405   12860 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:23:52.424528   12860 start.go:297] selected driver: qemu2
	I1030 11:23:52.424552   12860 start.go:901] validating driver "qemu2" against &{Name:ha-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:23:52.424639   12860 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:23:52.427271   12860 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:23:52.427295   12860 cni.go:84] Creating CNI manager for ""
	I1030 11:23:52.427323   12860 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 11:23:52.427367   12860 start.go:340] cluster config:
	{Name:ha-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-065000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:23:52.431842   12860 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:23:52.439475   12860 out.go:177] * Starting "ha-065000" primary control-plane node in "ha-065000" cluster
	I1030 11:23:52.442454   12860 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:23:52.442470   12860 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:23:52.442476   12860 cache.go:56] Caching tarball of preloaded images
	I1030 11:23:52.442552   12860 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:23:52.442558   12860 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:23:52.442617   12860 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/ha-065000/config.json ...
	I1030 11:23:52.443024   12860 start.go:360] acquireMachinesLock for ha-065000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:23:52.443070   12860 start.go:364] duration metric: took 40.458µs to acquireMachinesLock for "ha-065000"
	I1030 11:23:52.443078   12860 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:23:52.443082   12860 fix.go:54] fixHost starting: 
	I1030 11:23:52.443198   12860 fix.go:112] recreateIfNeeded on ha-065000: state=Stopped err=<nil>
	W1030 11:23:52.443205   12860 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:23:52.451333   12860 out.go:177] * Restarting existing qemu2 VM for "ha-065000" ...
	I1030 11:23:52.455457   12860 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:23:52.455499   12860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:53:db:2a:e8:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:23:52.457755   12860 main.go:141] libmachine: STDOUT: 
	I1030 11:23:52.457775   12860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:23:52.457810   12860 fix.go:56] duration metric: took 14.726625ms for fixHost
	I1030 11:23:52.457815   12860 start.go:83] releasing machines lock for "ha-065000", held for 14.741458ms
	W1030 11:23:52.457821   12860 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:23:52.457857   12860 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:23:52.457861   12860 start.go:729] Will try again in 5 seconds ...
	I1030 11:23:57.459976   12860 start.go:360] acquireMachinesLock for ha-065000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:23:57.460407   12860 start.go:364] duration metric: took 348µs to acquireMachinesLock for "ha-065000"
	I1030 11:23:57.460529   12860 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:23:57.460552   12860 fix.go:54] fixHost starting: 
	I1030 11:23:57.461194   12860 fix.go:112] recreateIfNeeded on ha-065000: state=Stopped err=<nil>
	W1030 11:23:57.461220   12860 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:23:57.465689   12860 out.go:177] * Restarting existing qemu2 VM for "ha-065000" ...
	I1030 11:23:57.473644   12860 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:23:57.473880   12860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:53:db:2a:e8:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:23:57.483700   12860 main.go:141] libmachine: STDOUT: 
	I1030 11:23:57.483747   12860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:23:57.483826   12860 fix.go:56] duration metric: took 23.276417ms for fixHost
	I1030 11:23:57.483850   12860 start.go:83] releasing machines lock for "ha-065000", held for 23.423708ms
	W1030 11:23:57.484010   12860 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-065000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-065000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:23:57.492589   12860 out.go:201] 
	W1030 11:23:57.496777   12860 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:23:57.496835   12860 out.go:270] * 
	* 
	W1030 11:23:57.499430   12860 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:23:57.506611   12860 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-065000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-065000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (36.426667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.13s)
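Both restart attempts in this test fail at the same step: the socket_vmnet_client wrapper cannot reach /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the agent rather than at QEMU itself. Two hedged checks, assuming the launchd-managed install that the /opt/socket_vmnet paths suggest:

    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet

If the socket file is absent or nothing is listening on it, every qemu2 start in this report will fail the same way.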

TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 node delete m03 -v=7 --alsologtostderr: exit status 83 (47.128084ms)

-- stdout --
	* The control-plane node ha-065000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-065000"

-- /stdout --
** stderr ** 
	I1030 11:23:57.668610   12872 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:57.669268   12872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:57.669272   12872 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:57.669275   12872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:57.669441   12872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:57.669663   12872 mustload.go:65] Loading cluster: ha-065000
	I1030 11:23:57.669869   12872 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:57.673614   12872 out.go:177] * The control-plane node ha-065000 host is not running: state=Stopped
	I1030 11:23:57.677613   12872 out.go:177]   To start a cluster, run: "minikube start -p ha-065000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-065000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (35.178583ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:23:57.715968   12874 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:23:57.716150   12874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:57.716153   12874 out.go:358] Setting ErrFile to fd 2...
	I1030 11:23:57.716156   12874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:23:57.716268   12874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:23:57.716384   12874 out.go:352] Setting JSON to false
	I1030 11:23:57.716395   12874 mustload.go:65] Loading cluster: ha-065000
	I1030 11:23:57.716447   12874 notify.go:220] Checking for updates...
	I1030 11:23:57.716566   12874 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:23:57.716576   12874 status.go:174] checking status of ha-065000 ...
	I1030 11:23:57.716847   12874 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:23:57.716851   12874 status.go:384] host is not running, skipping remaining checks
	I1030 11:23:57.716853   12874 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.692ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)
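Note that node delete exits 83, not 80: the command bails out in mustload (mustload.go:65 above) as soon as it sees the control-plane host Stopped, so no delete is ever attempted; minikube assigns distinct exit codes per failure class, and the GUEST_PROVISION start failures earlier in this report exit 80. To confirm the node set is unchanged afterwards:

    out/minikube-darwin-arm64 node list -p ha-065000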

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-065000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.65725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

TestMultiControlPlane/serial/StopCluster (3.36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-065000 stop -v=7 --alsologtostderr: (3.248395041s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr: exit status 7 (71.218708ms)

-- stdout --
	ha-065000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1030 11:24:01.158373   12901 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:24:01.158603   12901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:01.158607   12901 out.go:358] Setting ErrFile to fd 2...
	I1030 11:24:01.158610   12901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:01.158751   12901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:24:01.158885   12901 out.go:352] Setting JSON to false
	I1030 11:24:01.158897   12901 mustload.go:65] Loading cluster: ha-065000
	I1030 11:24:01.158951   12901 notify.go:220] Checking for updates...
	I1030 11:24:01.159132   12901 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:24:01.159143   12901 status.go:174] checking status of ha-065000 ...
	I1030 11:24:01.159420   12901 status.go:371] ha-065000 host status = "Stopped" (err=<nil>)
	I1030 11:24:01.159424   12901 status.go:384] host is not running, skipping remaining checks
	I1030 11:24:01.159426   12901 status.go:176] ha-065000 status: &{Name:ha-065000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr": ha-065000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr": ha-065000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr": ha-065000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (36.617792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.36s)
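The three follow-up assertions appear to count substrings in the status output (control-plane entries and stopped kubelets/apiservers); with only one node ever provisioned, the counts cannot reach the expected multi-node values. An equivalent manual count of control-plane entries:

    out/minikube-darwin-arm64 -p ha-065000 status -v=7 --alsologtostderr | grep -c "type: Control Plane"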

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-065000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-065000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.192381208s)

-- stdout --
	* [ha-065000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-065000" primary control-plane node in "ha-065000" cluster
	* Restarting existing qemu2 VM for "ha-065000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-065000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:24:01.230093   12905 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:24:01.230272   12905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:01.230275   12905 out.go:358] Setting ErrFile to fd 2...
	I1030 11:24:01.230278   12905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:01.230398   12905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:24:01.231463   12905 out.go:352] Setting JSON to false
	I1030 11:24:01.249047   12905 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6812,"bootTime":1730305829,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:24:01.249119   12905 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:24:01.254406   12905 out.go:177] * [ha-065000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:24:01.262218   12905 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:24:01.262280   12905 notify.go:220] Checking for updates...
	I1030 11:24:01.269280   12905 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:24:01.272239   12905 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:24:01.276291   12905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:24:01.279367   12905 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:24:01.282280   12905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:24:01.285624   12905 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:24:01.285888   12905 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:24:01.290309   12905 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:24:01.297275   12905 start.go:297] selected driver: qemu2
	I1030 11:24:01.297285   12905 start.go:901] validating driver "qemu2" against &{Name:ha-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:24:01.297360   12905 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:24:01.299851   12905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:24:01.299874   12905 cni.go:84] Creating CNI manager for ""
	I1030 11:24:01.299898   12905 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 11:24:01.299943   12905 start.go:340] cluster config:
	{Name:ha-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-065000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:24:01.304524   12905 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:24:01.313304   12905 out.go:177] * Starting "ha-065000" primary control-plane node in "ha-065000" cluster
	I1030 11:24:01.317264   12905 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:24:01.317281   12905 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:24:01.317288   12905 cache.go:56] Caching tarball of preloaded images
	I1030 11:24:01.317350   12905 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:24:01.317357   12905 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:24:01.317415   12905 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/ha-065000/config.json ...
	I1030 11:24:01.317882   12905 start.go:360] acquireMachinesLock for ha-065000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:24:01.317911   12905 start.go:364] duration metric: took 23.291µs to acquireMachinesLock for "ha-065000"
	I1030 11:24:01.317919   12905 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:24:01.317924   12905 fix.go:54] fixHost starting: 
	I1030 11:24:01.318046   12905 fix.go:112] recreateIfNeeded on ha-065000: state=Stopped err=<nil>
	W1030 11:24:01.318053   12905 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:24:01.325288   12905 out.go:177] * Restarting existing qemu2 VM for "ha-065000" ...
	I1030 11:24:01.329315   12905 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:24:01.329352   12905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:53:db:2a:e8:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:24:01.331570   12905 main.go:141] libmachine: STDOUT: 
	I1030 11:24:01.331587   12905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:24:01.331620   12905 fix.go:56] duration metric: took 13.694791ms for fixHost
	I1030 11:24:01.331624   12905 start.go:83] releasing machines lock for "ha-065000", held for 13.70925ms
	W1030 11:24:01.331630   12905 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:24:01.331664   12905 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:24:01.331669   12905 start.go:729] Will try again in 5 seconds ...
	I1030 11:24:06.333790   12905 start.go:360] acquireMachinesLock for ha-065000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:24:06.334228   12905 start.go:364] duration metric: took 338.5µs to acquireMachinesLock for "ha-065000"
	I1030 11:24:06.334345   12905 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:24:06.334365   12905 fix.go:54] fixHost starting: 
	I1030 11:24:06.335035   12905 fix.go:112] recreateIfNeeded on ha-065000: state=Stopped err=<nil>
	W1030 11:24:06.335061   12905 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:24:06.339524   12905 out.go:177] * Restarting existing qemu2 VM for "ha-065000" ...
	I1030 11:24:06.346401   12905 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:24:06.346724   12905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:53:db:2a:e8:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/ha-065000/disk.qcow2
	I1030 11:24:06.356326   12905 main.go:141] libmachine: STDOUT: 
	I1030 11:24:06.356378   12905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:24:06.356435   12905 fix.go:56] duration metric: took 22.071375ms for fixHost
	I1030 11:24:06.356450   12905 start.go:83] releasing machines lock for "ha-065000", held for 22.198667ms
	W1030 11:24:06.356610   12905 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-065000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-065000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:24:06.363369   12905 out.go:201] 
	W1030 11:24:06.367503   12905 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:24:06.367532   12905 out.go:270] * 
	* 
	W1030 11:24:06.370040   12905 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:24:06.377472   12905 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-065000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (77.292584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
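The failing invocation is visible in the stderr log: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which takes the socket path first and the wrapped command after it. The connection itself can be exercised without QEMU by wrapping a trivial command instead (a minimal repro sketch, assuming the client simply execs whatever command follows the socket path once connected):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true; echo "exit code: $?"

On this agent it should reproduce the "Connection refused" error immediately, independently of minikube.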

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-065000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.709167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)
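
Both this check and the HAppyAfterSecondaryNodeAdd check further down work by decoding `minikube profile list --output json` and inspecting the profile's Status string and Config.Nodes slice; because the cluster never started, the profile is stuck at "Starting" with a single node. A sketch of that decode, using only field names visible in the JSON above (the command invocation and error handling are illustrative, not the test's actual code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The failed assertions above expect Status "Degraded" (here) or
		// "HAppy" plus 4 nodes (below); this run reports "Starting" and 1 node.
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}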

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-065000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-065000 --control-plane -v=7 --alsologtostderr: exit status 83 (45.717583ms)

-- stdout --
	* The control-plane node ha-065000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-065000"

-- /stdout --
** stderr ** 
	I1030 11:24:06.590776   12920 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:24:06.590979   12920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:06.590982   12920 out.go:358] Setting ErrFile to fd 2...
	I1030 11:24:06.590984   12920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:06.591128   12920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:24:06.591391   12920 mustload.go:65] Loading cluster: ha-065000
	I1030 11:24:06.591609   12920 config.go:182] Loaded profile config "ha-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:24:06.595487   12920 out.go:177] * The control-plane node ha-065000 host is not running: state=Stopped
	I1030 11:24:06.599389   12920 out.go:177]   To start a cluster, run: "minikube start -p ha-065000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-065000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (34.585916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-065000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-065000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-065000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-065000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-065000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-065000 -n ha-065000: exit status 7 (35.237083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

TestImageBuild/serial/Setup (9.94s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-271000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-271000 --driver=qemu2 : exit status 80 (9.863138s)

-- stdout --
	* [image-271000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-271000" primary control-plane node in "image-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-271000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-271000 -n image-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-271000 -n image-271000: exit status 7 (73.673833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.94s)
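
The stdout above also shows minikube's create-retry shape: the first StartHost attempt fails, the half-created VM is deleted, and exactly one retry is made before the command exits with status 80. Roughly, under stated assumptions (function and variable names are invented for illustration; this is not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHostWithRetry mirrors the create -> delete -> wait -> retry-once shape
// visible in the stdout above.
func startHostWithRetry(create func() error, deleteHost func()) error {
	err := create()
	if err == nil {
		return nil
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	deleteHost()                // "* Deleting \"image-271000\" in qemu2 ..."
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := create(); err != nil {
		// A second failure is terminal and surfaces as GUEST_PROVISION, exit status 80.
		return fmt.Errorf("GUEST_PROVISION: %w", err)
	}
	return nil
}

func main() {
	refused := errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	fmt.Println(startHostWithRetry(
		func() error { return refused },
		func() { fmt.Println("deleting half-created VM ...") },
	))
}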

TestJSONOutput/start/Command (9.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-638000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-638000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.901677125s)

-- stdout --
	{"specversion":"1.0","id":"1a2bc9a9-79fc-48bb-acb1-4f2db9c733b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-638000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a7e3328-f43c-4234-8cff-6f4d0e2a0220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19883"}}
	{"specversion":"1.0","id":"2410547f-7141-4b81-b1cf-2194e525f3b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig"}}
	{"specversion":"1.0","id":"12a9feb6-09ff-4180-95ee-14663411b220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a80655bc-cc11-44c4-9090-5fc271ba65c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5124ef32-2a15-47db-8ca0-9c9ea3c60350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube"}}
	{"specversion":"1.0","id":"30bb450d-63a7-41fe-b4d9-842216dc6439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f8ccea4e-f50b-4571-a816-31e690065b88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4404cf35-a4c8-4d15-a8a9-14d7aa53dda1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"8335c05c-c077-4534-bba3-03cb21cf330a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-638000\" primary control-plane node in \"json-output-638000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"25aa6f1d-6abd-437e-89db-29317faa917d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f76320e7-8a05-468e-941d-763ab563c644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-638000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fff0a5a-fd9f-4694-94bb-dd3964384bbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"379c86b3-94b2-4a77-8c60-6cc80b06dc4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8dc0f1fe-bac2-4b8f-a4d2-e86b805cb522","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-638000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ed4eec02-9da6-4eae-9805-7456e0f7ba11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"20ce2ad5-e6a8-43d2-b1e8-bfb34bf829ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-638000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.90s)
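
This failure is a knock-on effect of the same socket_vmnet problem: with --output=json every stdout line is expected to be a JSON cloud event, but socket_vmnet_client's raw "OUTPUT:" and "ERROR:" lines leak into the stream, so a line-by-line decode stops at the first non-JSON line. The same class of failure recurs under TestJSONOutput/unpause below, where a human-readable "*" line trips the decoder. A minimal reproduction of the parse error (the decode shape is an assumption, not the test's actual code):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Two lines as they actually interleave on stdout in the run above:
	// a valid cloud event, then socket_vmnet_client's raw output.
	stream := "{\"specversion\":\"1.0\"}\nOUTPUT: \nERROR: Failed to connect\n"

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints the report's error: invalid character 'O' looking for
			// beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event ok:", ev)
	}
}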

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-638000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-638000 --output=json --user=testUser: exit status 83 (85.345375ms)

-- stdout --
	{"specversion":"1.0","id":"48e05b7b-10dc-4126-90c0-48175879bb1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-638000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"7db8e217-7aa1-4ec3-9697-84f550c23c0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-638000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-638000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-638000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-638000 --output=json --user=testUser: exit status 83 (50.307667ms)

-- stdout --
	* The control-plane node json-output-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-638000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-638000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-638000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-066000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-066000 --driver=qemu2 : exit status 80 (9.793659708s)

-- stdout --
	* [first-066000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-066000" primary control-plane node in "first-066000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-066000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-30 11:24:41.107322 -0700 PDT m=+480.288327710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-067000 -n second-067000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-067000 -n second-067000: exit status 85 (84.345125ms)

-- stdout --
	* Profile "second-067000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-067000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-067000" host is not running, skipping log retrieval (state="* Profile \"second-067000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-067000\"")
helpers_test.go:175: Cleaning up "second-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-067000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-30 11:24:41.307203 -0700 PDT m=+480.488210876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-066000 -n first-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-066000 -n first-066000: exit status 7 (34.570416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-066000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-066000
--- FAIL: TestMinikubeProfile (10.11s)
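
For reference, the exit statuses that recur in these post-mortems, with the meanings they carry in this report only (inferred from the surrounding log lines; not an authoritative minikube exit-code table):

package main

import "fmt"

// Observed in this report's post-mortems only.
var observedExitStatus = map[int]string{
	7:  `"minikube status" against a profile whose host exists but is Stopped`,
	80: "GUEST_PROVISION: the qemu2 VM could not be created (socket_vmnet unreachable)",
	83: "a command run against a profile whose control-plane host is not running",
	85: "a command run against a profile that does not exist",
}

func main() {
	for _, code := range []int{7, 80, 83, 85} {
		fmt.Printf("exit %d: %s\n", code, observedExitStatus[code])
	}
}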

TestMountStart/serial/StartWithMountFirst (10.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-364000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-364000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.042489875s)

-- stdout --
	* [mount-start-1-364000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-364000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-364000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-364000 -n mount-start-1-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-364000 -n mount-start-1-364000: exit status 7 (76.482375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.12s)

TestMultiNode/serial/FreshStart2Nodes (9.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-097000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-097000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.914862s)

-- stdout --
	* [multinode-097000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-097000" primary control-plane node in "multinode-097000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-097000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:24:51.770569   13063 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:24:51.770732   13063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:51.770735   13063 out.go:358] Setting ErrFile to fd 2...
	I1030 11:24:51.770738   13063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:24:51.770866   13063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:24:51.772001   13063 out.go:352] Setting JSON to false
	I1030 11:24:51.789678   13063 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6862,"bootTime":1730305829,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:24:51.789748   13063 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:24:51.795191   13063 out.go:177] * [multinode-097000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:24:51.803089   13063 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:24:51.803155   13063 notify.go:220] Checking for updates...
	I1030 11:24:51.811058   13063 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:24:51.814068   13063 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:24:51.817143   13063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:24:51.820072   13063 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:24:51.823064   13063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:24:51.826283   13063 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:24:51.830042   13063 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:24:51.837098   13063 start.go:297] selected driver: qemu2
	I1030 11:24:51.837103   13063 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:24:51.837112   13063 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:24:51.839617   13063 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:24:51.844071   13063 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:24:51.847176   13063 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:24:51.847198   13063 cni.go:84] Creating CNI manager for ""
	I1030 11:24:51.847226   13063 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1030 11:24:51.847231   13063 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 11:24:51.847286   13063 start.go:340] cluster config:
	{Name:multinode-097000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:24:51.851876   13063 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:24:51.860066   13063 out.go:177] * Starting "multinode-097000" primary control-plane node in "multinode-097000" cluster
	I1030 11:24:51.864031   13063 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:24:51.864050   13063 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:24:51.864059   13063 cache.go:56] Caching tarball of preloaded images
	I1030 11:24:51.864145   13063 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:24:51.864151   13063 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:24:51.864365   13063 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/multinode-097000/config.json ...
	I1030 11:24:51.864378   13063 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/multinode-097000/config.json: {Name:mkffad802eb39f49fb7c90e48efb69bd84451c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:24:51.864778   13063 start.go:360] acquireMachinesLock for multinode-097000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:24:51.864829   13063 start.go:364] duration metric: took 45.125µs to acquireMachinesLock for "multinode-097000"
	I1030 11:24:51.864842   13063 start.go:93] Provisioning new machine with config: &{Name:multinode-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:24:51.864875   13063 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:24:51.873088   13063 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:24:51.891789   13063 start.go:159] libmachine.API.Create for "multinode-097000" (driver="qemu2")
	I1030 11:24:51.891824   13063 client.go:168] LocalClient.Create starting
	I1030 11:24:51.891902   13063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:24:51.891942   13063 main.go:141] libmachine: Decoding PEM data...
	I1030 11:24:51.891956   13063 main.go:141] libmachine: Parsing certificate...
	I1030 11:24:51.891997   13063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:24:51.892027   13063 main.go:141] libmachine: Decoding PEM data...
	I1030 11:24:51.892040   13063 main.go:141] libmachine: Parsing certificate...
	I1030 11:24:51.892495   13063 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:24:52.051934   13063 main.go:141] libmachine: Creating SSH key...
	I1030 11:24:52.187488   13063 main.go:141] libmachine: Creating Disk image...
	I1030 11:24:52.187495   13063 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:24:52.187700   13063 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:24:52.197998   13063 main.go:141] libmachine: STDOUT: 
	I1030 11:24:52.198015   13063 main.go:141] libmachine: STDERR: 
	I1030 11:24:52.198064   13063 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2 +20000M
	I1030 11:24:52.206629   13063 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:24:52.206643   13063 main.go:141] libmachine: STDERR: 
	I1030 11:24:52.206654   13063 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:24:52.206668   13063 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:24:52.206682   13063 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:24:52.206709   13063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:66:6e:d3:b4:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:24:52.208547   13063 main.go:141] libmachine: STDOUT: 
	I1030 11:24:52.208563   13063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:24:52.208589   13063 client.go:171] duration metric: took 316.756ms to LocalClient.Create
	I1030 11:24:54.210763   13063 start.go:128] duration metric: took 2.345894583s to createHost
	I1030 11:24:54.210869   13063 start.go:83] releasing machines lock for "multinode-097000", held for 2.34605675s
	W1030 11:24:54.210943   13063 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:24:54.226191   13063 out.go:177] * Deleting "multinode-097000" in qemu2 ...
	W1030 11:24:54.253714   13063 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:24:54.253797   13063 start.go:729] Will try again in 5 seconds ...
	I1030 11:24:59.255901   13063 start.go:360] acquireMachinesLock for multinode-097000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:24:59.256544   13063 start.go:364] duration metric: took 547µs to acquireMachinesLock for "multinode-097000"
	I1030 11:24:59.256664   13063 start.go:93] Provisioning new machine with config: &{Name:multinode-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:24:59.256969   13063 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:24:59.272875   13063 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:24:59.322038   13063 start.go:159] libmachine.API.Create for "multinode-097000" (driver="qemu2")
	I1030 11:24:59.322090   13063 client.go:168] LocalClient.Create starting
	I1030 11:24:59.322224   13063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:24:59.322309   13063 main.go:141] libmachine: Decoding PEM data...
	I1030 11:24:59.322327   13063 main.go:141] libmachine: Parsing certificate...
	I1030 11:24:59.322405   13063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:24:59.322467   13063 main.go:141] libmachine: Decoding PEM data...
	I1030 11:24:59.322480   13063 main.go:141] libmachine: Parsing certificate...
	I1030 11:24:59.323022   13063 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:24:59.492719   13063 main.go:141] libmachine: Creating SSH key...
	I1030 11:24:59.583429   13063 main.go:141] libmachine: Creating Disk image...
	I1030 11:24:59.583435   13063 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:24:59.583617   13063 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:24:59.593451   13063 main.go:141] libmachine: STDOUT: 
	I1030 11:24:59.593476   13063 main.go:141] libmachine: STDERR: 
	I1030 11:24:59.593534   13063 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2 +20000M
	I1030 11:24:59.601955   13063 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:24:59.601970   13063 main.go:141] libmachine: STDERR: 
	I1030 11:24:59.601989   13063 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:24:59.601994   13063 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:24:59.602001   13063 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:24:59.602035   13063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:66:a6:22:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:24:59.603882   13063 main.go:141] libmachine: STDOUT: 
	I1030 11:24:59.603901   13063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:24:59.603914   13063 client.go:171] duration metric: took 281.821709ms to LocalClient.Create
	I1030 11:25:01.606063   13063 start.go:128] duration metric: took 2.34906975s to createHost
	I1030 11:25:01.606170   13063 start.go:83] releasing machines lock for "multinode-097000", held for 2.349626875s
	W1030 11:25:01.606529   13063 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-097000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-097000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:25:01.623149   13063 out.go:201] 
	W1030 11:25:01.627302   13063 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:25:01.627327   13063 out.go:270] * 
	* 
	W1030 11:25:01.629925   13063 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:25:01.637172   13063 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-097000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (73.453875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.99s)
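
The run's root cause is already visible here: QEMU's socket networking cannot reach the socket_vmnet daemon, so no VM is ever created and every later test in this group finds a stopped host and a missing cluster. The following is a minimal, illustrative Go probe of that same connectivity check (the socket path is taken from the machine config logged above; the program is not part of minikube or this suite):

	// probe_socket_vmnet.go — illustrative sketch only.
	// Dials the socket_vmnet unix socket that libmachine failed to reach.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path copied from the logged machine config (SocketVMnetPath).
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// An absent or dead daemon yields "connection refused",
			// matching the StartHost failure logged above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy builder this prints the success line; here it would report the same "Connection refused" that libmachine's STDERR shows.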

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (89.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (64.375333ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-097000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- rollout status deployment/busybox: exit status 1 (63.165542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.751709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:01.918278   12043 retry.go:31] will retry after 1.48894294s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.528667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:03.519099   12043 retry.go:31] will retry after 1.530229328s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.90475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:05.162535   12043 retry.go:31] will retry after 1.633519428s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.222625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:06.907595   12043 retry.go:31] will retry after 4.833900873s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.575792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:11.851370   12043 retry.go:31] will retry after 6.481108574s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.295291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:18.444205   12043 retry.go:31] will retry after 4.603735695s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.597417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:23.159833   12043 retry.go:31] will retry after 7.801556443s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.629708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:31.073280   12043 retry.go:31] will retry after 21.547118287s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.067458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1030 11:25:52.730863   12043 retry.go:31] will retry after 37.676661463s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.711375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (63.258125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- exec  -- nslookup kubernetes.io: exit status 1 (63.108084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.205459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.252ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.230375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (89.08s)
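
The growing "will retry after" delays above (from about 1.5 s up to about 37.7 s across the 89 s budget) have the shape of a capped, jittered backoff: the helper keeps polling for Pod IPs while the cluster has a chance to appear. A rough, self-contained sketch of that pattern follows; the base delay, growth factor, cap, and jitter fraction are all assumptions for illustration, not the actual retry.go tuning:

	// backoff_sketch.go — hedged illustration of capped exponential
	// backoff with jitter, the shape of the delays logged above.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		delay := 1500 * time.Millisecond // assumed base delay
		maxDelay := 40 * time.Second     // assumed cap
		for attempt := 1; attempt <= 10; attempt++ {
			// Up to 25% jitter keeps concurrent retries from synchronizing.
			jitter := time.Duration(rand.Int63n(int64(delay / 4)))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, delay+jitter)
			delay *= 2 // assumed growth factor
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}

Here the backoff can never succeed: with no API server running, every attempt returns "no server found for cluster" until the overall deadline expires.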

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-097000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.513375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (34.945333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-097000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-097000 -v 3 --alsologtostderr: exit status 83 (49.347917ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-097000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-097000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:30.936404   13140 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:30.936593   13140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:30.936596   13140 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:30.936599   13140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:30.936732   13140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:30.936952   13140 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:30.937160   13140 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:30.942270   13140 out.go:177] * The control-plane node multinode-097000 host is not running: state=Stopped
	I1030 11:26:30.946121   13140 out.go:177]   To start a cluster, run: "minikube start -p multinode-097000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-097000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.157375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-097000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-097000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.292291ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-097000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-097000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-097000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.332125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-097000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-097000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-097000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-097000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (34.856416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status --output json --alsologtostderr: exit status 7 (34.618458ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-097000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:31.171380   13152 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:31.171537   13152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.171541   13152 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:31.171543   13152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.171672   13152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:31.171794   13152 out.go:352] Setting JSON to true
	I1030 11:26:31.171805   13152 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:31.171884   13152 notify.go:220] Checking for updates...
	I1030 11:26:31.172047   13152 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:31.172056   13152 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:31.172317   13152 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:31.172321   13152 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:31.172323   13152 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-097000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.447833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
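
This failure is a shape mismatch rather than a new error: with only one (stopped) node, "minikube status --output json" emitted a single JSON object, while the test decodes into []cluster.Status. A self-contained sketch of a decoder tolerant of both forms follows; the Status struct below mirrors only the fields visible in the logged JSON, not minikube's real cluster.Status:

	// decode_status.go — hedged sketch: accept either a single status
	// object or an array, the mismatch behind "cannot unmarshal object
	// into Go value of type []cluster.Status" above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func decodeStatuses(data []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil // already an array
		}
		var one Status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, fmt.Errorf("neither a status array nor an object: %w", err)
		}
		return []Status{one}, nil // wrap the single-object form
	}

	func main() {
		raw := []byte(`{"Name":"multinode-097000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		statuses, err := decodeStatuses(raw)
		fmt.Println(statuses, err)
	}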

                                                
                                    
TestMultiNode/serial/StopNode (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 node stop m03: exit status 85 (52.766875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-097000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status: exit status 7 (35.025167ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr: exit status 7 (35.614625ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:31.331199   13160 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:31.331385   13160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.331390   13160 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:31.331393   13160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.331509   13160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:31.331645   13160 out.go:352] Setting JSON to false
	I1030 11:26:31.331657   13160 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:31.331696   13160 notify.go:220] Checking for updates...
	I1030 11:26:31.331854   13160 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:31.331863   13160 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:31.332098   13160 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:31.332102   13160 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:31.332104   13160 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr": multinode-097000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.326ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (51.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 node start m03 -v=7 --alsologtostderr: exit status 85 (53.523667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:31.401841   13164 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:31.402266   13164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.402271   13164 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:31.402273   13164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.402466   13164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:31.402701   13164 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:31.402912   13164 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:31.407303   13164 out.go:201] 
	W1030 11:26:31.411294   13164 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1030 11:26:31.411298   13164 out.go:270] * 
	* 
	W1030 11:26:31.413103   13164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:26:31.417300   13164 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1030 11:26:31.401841   13164 out.go:345] Setting OutFile to fd 1 ...
I1030 11:26:31.402266   13164 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:26:31.402271   13164 out.go:358] Setting ErrFile to fd 2...
I1030 11:26:31.402273   13164 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1030 11:26:31.402466   13164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
I1030 11:26:31.402701   13164 mustload.go:65] Loading cluster: multinode-097000
I1030 11:26:31.402912   13164 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1030 11:26:31.407303   13164 out.go:201] 
W1030 11:26:31.411294   13164 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1030 11:26:31.411298   13164 out.go:270] * 
* 
W1030 11:26:31.413103   13164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1030 11:26:31.417300   13164 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-097000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (35.993ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:31.456545   13166 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:31.456715   13166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.456719   13166 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:31.456721   13166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:31.456874   13166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:31.456997   13166 out.go:352] Setting JSON to false
	I1030 11:26:31.457014   13166 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:31.457063   13166 notify.go:220] Checking for updates...
	I1030 11:26:31.457251   13166 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:31.457260   13166 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:31.457514   13166 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:31.457517   13166 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:31.457520   13166 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:26:31.458422   12043 retry.go:31] will retry after 1.415995844s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (78.215916ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:32.952739   13168 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:32.952944   13168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:32.952948   13168 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:32.952952   13168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:32.953103   13168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:32.953248   13168 out.go:352] Setting JSON to false
	I1030 11:26:32.953263   13168 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:32.953305   13168 notify.go:220] Checking for updates...
	I1030 11:26:32.953511   13168 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:32.953522   13168 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:32.953879   13168 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:32.953884   13168 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:32.953886   13168 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:26:32.954992   12043 retry.go:31] will retry after 1.533432729s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (80.7955ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:34.568214   13170 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:34.568440   13170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:34.568444   13170 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:34.568447   13170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:34.568609   13170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:34.568753   13170 out.go:352] Setting JSON to false
	I1030 11:26:34.568770   13170 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:34.569084   13170 notify.go:220] Checking for updates...
	I1030 11:26:34.569835   13170 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:34.569850   13170 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:34.570356   13170 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:34.570363   13170 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:34.570366   13170 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:26:34.571585   12043 retry.go:31] will retry after 2.568980789s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (79.600584ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:37.220281   13172 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:37.220504   13172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:37.220511   13172 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:37.220514   13172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:37.220698   13172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:37.220857   13172 out.go:352] Setting JSON to false
	I1030 11:26:37.220872   13172 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:37.220906   13172 notify.go:220] Checking for updates...
	I1030 11:26:37.221141   13172 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:37.221151   13172 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:37.221452   13172 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:37.221456   13172 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:37.221459   13172 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:26:37.222483   12043 retry.go:31] will retry after 2.882492306s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (80.757166ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:40.185804   13174 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:40.186004   13174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:40.186008   13174 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:40.186014   13174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:40.186172   13174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:40.186323   13174 out.go:352] Setting JSON to false
	I1030 11:26:40.186337   13174 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:40.186378   13174 notify.go:220] Checking for updates...
	I1030 11:26:40.186611   13174 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:40.186621   13174 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:40.186910   13174 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:40.186914   13174 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:40.186917   13174 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:26:40.188063   12043 retry.go:31] will retry after 2.975788963s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (80.299125ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:43.244340   13176 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:43.244552   13176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:43.244556   13176 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:43.244559   13176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:43.244725   13176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:43.244888   13176 out.go:352] Setting JSON to false
	I1030 11:26:43.244901   13176 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:43.244935   13176 notify.go:220] Checking for updates...
	I1030 11:26:43.245144   13176 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:43.245154   13176 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:43.245429   13176 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:43.245433   13176 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:43.245436   13176 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:26:43.246495   12043 retry.go:31] will retry after 11.324058833s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (80.195333ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:26:54.650719   13181 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:26:54.650945   13181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:54.650950   13181 out.go:358] Setting ErrFile to fd 2...
	I1030 11:26:54.650953   13181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:26:54.651123   13181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:26:54.651273   13181 out.go:352] Setting JSON to false
	I1030 11:26:54.651286   13181 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:26:54.651317   13181 notify.go:220] Checking for updates...
	I1030 11:26:54.651549   13181 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:26:54.651559   13181 status.go:174] checking status of multinode-097000 ...
	I1030 11:26:54.651854   13181 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:26:54.651859   13181 status.go:384] host is not running, skipping remaining checks
	I1030 11:26:54.651861   13181 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:26:54.653051   12043 retry.go:31] will retry after 13.020513421s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (81.6585ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:27:07.755263   13183 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:27:07.755489   13183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:07.755493   13183 out.go:358] Setting ErrFile to fd 2...
	I1030 11:27:07.755496   13183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:07.755653   13183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:27:07.755833   13183 out.go:352] Setting JSON to false
	I1030 11:27:07.755846   13183 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:27:07.755892   13183 notify.go:220] Checking for updates...
	I1030 11:27:07.756122   13183 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:27:07.756132   13183 status.go:174] checking status of multinode-097000 ...
	I1030 11:27:07.756464   13183 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:27:07.756468   13183 status.go:384] host is not running, skipping remaining checks
	I1030 11:27:07.756470   13183 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1030 11:27:07.757484   12043 retry.go:31] will retry after 14.854330219s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr: exit status 7 (80.452084ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:27:22.692443   13185 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:27:22.692655   13185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:22.692663   13185 out.go:358] Setting ErrFile to fd 2...
	I1030 11:27:22.692666   13185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:22.692837   13185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:27:22.692970   13185 out.go:352] Setting JSON to false
	I1030 11:27:22.692984   13185 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:27:22.693036   13185 notify.go:220] Checking for updates...
	I1030 11:27:22.693223   13185 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:27:22.693233   13185 status.go:174] checking status of multinode-097000 ...
	I1030 11:27:22.693536   13185 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:27:22.693540   13185 status.go:384] host is not running, skipping remaining checks
	I1030 11:27:22.693543   13185 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-097000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (36.948458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.36s)
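Note: the seven retries logged above all follow one pattern — rerun `minikube status`, receive exit status 7 because the host is stopped, sleep for a growing interval, and try again until the budget is spent. Below is a minimal Go sketch of that polling loop; the function name `pollStatus` and the delay list are invented for illustration, this is not minikube's actual retry.go.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollStatus reruns `minikube status` with growing delays until it exits 0
	// or the delay list is exhausted; a stopped host keeps returning exit
	// status 7, so the last error is handed back to the caller.
	func pollStatus(profile string, delays []time.Duration) error {
		var err error
		for _, d := range delays {
			cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status")
			if err = cmd.Run(); err == nil {
				return nil // host came back
			}
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		// Delays roughly mirroring the backoff seen in the log above.
		delays := []time.Duration{2 * time.Second, 3 * time.Second, 11 * time.Second}
		if err := pollStatus("multinode-097000", delays); err != nil {
			fmt.Println("status never recovered:", err)
		}
	}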

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-097000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-097000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-097000: (2.849783542s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-097000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-097000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.231978292s)

                                                
                                                
-- stdout --
	* [multinode-097000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-097000" primary control-plane node in "multinode-097000" cluster
	* Restarting existing qemu2 VM for "multinode-097000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-097000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:27:25.687257   13209 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:27:25.687469   13209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:25.687473   13209 out.go:358] Setting ErrFile to fd 2...
	I1030 11:27:25.687476   13209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:25.687632   13209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:27:25.688866   13209 out.go:352] Setting JSON to false
	I1030 11:27:25.708688   13209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7016,"bootTime":1730305829,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:27:25.708758   13209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:27:25.713883   13209 out.go:177] * [multinode-097000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:27:25.721888   13209 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:27:25.721963   13209 notify.go:220] Checking for updates...
	I1030 11:27:25.728854   13209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:27:25.731822   13209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:27:25.734794   13209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:27:25.737878   13209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:27:25.740845   13209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:27:25.744076   13209 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:27:25.744124   13209 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:27:25.747794   13209 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:27:25.754766   13209 start.go:297] selected driver: qemu2
	I1030 11:27:25.754772   13209 start.go:901] validating driver "qemu2" against &{Name:multinode-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:27:25.754820   13209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:27:25.757354   13209 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:27:25.757379   13209 cni.go:84] Creating CNI manager for ""
	I1030 11:27:25.757407   13209 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 11:27:25.757452   13209 start.go:340] cluster config:
	{Name:multinode-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:27:25.761934   13209 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:25.768788   13209 out.go:177] * Starting "multinode-097000" primary control-plane node in "multinode-097000" cluster
	I1030 11:27:25.772818   13209 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:27:25.772835   13209 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:27:25.772843   13209 cache.go:56] Caching tarball of preloaded images
	I1030 11:27:25.772926   13209 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:27:25.772932   13209 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:27:25.772982   13209 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/multinode-097000/config.json ...
	I1030 11:27:25.773408   13209 start.go:360] acquireMachinesLock for multinode-097000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:27:25.773470   13209 start.go:364] duration metric: took 49.875µs to acquireMachinesLock for "multinode-097000"
	I1030 11:27:25.773480   13209 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:27:25.773484   13209 fix.go:54] fixHost starting: 
	I1030 11:27:25.773605   13209 fix.go:112] recreateIfNeeded on multinode-097000: state=Stopped err=<nil>
	W1030 11:27:25.773612   13209 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:27:25.781823   13209 out.go:177] * Restarting existing qemu2 VM for "multinode-097000" ...
	I1030 11:27:25.785810   13209 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:27:25.785850   13209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:66:a6:22:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:27:25.788151   13209 main.go:141] libmachine: STDOUT: 
	I1030 11:27:25.788172   13209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:27:25.788202   13209 fix.go:56] duration metric: took 14.717291ms for fixHost
	I1030 11:27:25.788207   13209 start.go:83] releasing machines lock for "multinode-097000", held for 14.731958ms
	W1030 11:27:25.788213   13209 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:27:25.788256   13209 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:27:25.788261   13209 start.go:729] Will try again in 5 seconds ...
	I1030 11:27:30.790341   13209 start.go:360] acquireMachinesLock for multinode-097000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:27:30.790679   13209 start.go:364] duration metric: took 280µs to acquireMachinesLock for "multinode-097000"
	I1030 11:27:30.790814   13209 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:27:30.790832   13209 fix.go:54] fixHost starting: 
	I1030 11:27:30.791483   13209 fix.go:112] recreateIfNeeded on multinode-097000: state=Stopped err=<nil>
	W1030 11:27:30.791509   13209 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:27:30.795920   13209 out.go:177] * Restarting existing qemu2 VM for "multinode-097000" ...
	I1030 11:27:30.804052   13209 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:27:30.804228   13209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:66:a6:22:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:27:30.813807   13209 main.go:141] libmachine: STDOUT: 
	I1030 11:27:30.813870   13209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:27:30.813949   13209 fix.go:56] duration metric: took 23.1005ms for fixHost
	I1030 11:27:30.813967   13209 start.go:83] releasing machines lock for "multinode-097000", held for 23.264292ms
	W1030 11:27:30.814118   13209 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-097000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-097000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:27:30.820974   13209 out.go:201] 
	W1030 11:27:30.825043   13209 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:27:30.825065   13209 out.go:270] * 
	* 
	W1030 11:27:30.827526   13209 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:27:30.835945   13209 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-097000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-097000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (36.354042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.23s)
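Note: every restart attempt in this run dies the same way — qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client gets `Connection refused` on /var/run/socket_vmnet, i.e. nothing is listening on that unix socket on the build agent. A hedged preflight sketch in Go, assuming only the socket path from the log (how the daemon should be restarted, e.g. via launchd, is outside what the log shows):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The path minikube passes to socket_vmnet_client in the log above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same failure mode as the test: connection refused means the
			// socket_vmnet daemon is not running, so any VM start will fail.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}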

                                                
                                    
TestMultiNode/serial/DeleteNode (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 node delete m03: exit status 83 (47.059833ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-097000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-097000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-097000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr: exit status 7 (34.995667ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:27:31.041110   13223 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:27:31.041296   13223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:31.041300   13223 out.go:358] Setting ErrFile to fd 2...
	I1030 11:27:31.041302   13223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:31.041426   13223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:27:31.041568   13223 out.go:352] Setting JSON to false
	I1030 11:27:31.041580   13223 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:27:31.041644   13223 notify.go:220] Checking for updates...
	I1030 11:27:31.041782   13223 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:27:31.041792   13223 status.go:174] checking status of multinode-097000 ...
	I1030 11:27:31.042027   13223 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:27:31.042030   13223 status.go:384] host is not running, skipping remaining checks
	I1030 11:27:31.042033   13223 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.202958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)
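Note: the post-mortem helpers shell out to `minikube status --format={{.Host}}`, which renders a Go text/template over the same status struct logged at status.go:176 above. A small illustrative sketch of that mechanism (the `Status` struct here is trimmed to the fields visible in the log, not minikube's full type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields shown in the status.go:176 log line.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "multinode-097000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the -- stdout -- block above
	}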

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-097000 stop: (2.024927792s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status: exit status 7 (73.009541ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr: exit status 7 (36.733708ms)

                                                
                                                
-- stdout --
	multinode-097000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:27:33.211680   13239 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:27:33.211861   13239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:33.211865   13239 out.go:358] Setting ErrFile to fd 2...
	I1030 11:27:33.211867   13239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:33.212011   13239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:27:33.212124   13239 out.go:352] Setting JSON to false
	I1030 11:27:33.212135   13239 mustload.go:65] Loading cluster: multinode-097000
	I1030 11:27:33.212201   13239 notify.go:220] Checking for updates...
	I1030 11:27:33.212346   13239 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:27:33.212354   13239 status.go:174] checking status of multinode-097000 ...
	I1030 11:27:33.212611   13239 status.go:371] multinode-097000 host status = "Stopped" (err=<nil>)
	I1030 11:27:33.212615   13239 status.go:384] host is not running, skipping remaining checks
	I1030 11:27:33.212617   13239 status.go:176] multinode-097000 status: &{Name:multinode-097000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr": multinode-097000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-097000 status --alsologtostderr": multinode-097000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.24825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.17s)
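Note: the two assertion failures above ("incorrect number of stopped hosts/kubelets") arise because only the control-plane node was ever created, so `status` prints a single per-node block where the test expects one per cluster node. A rough sketch of that kind of count-based check — the expected node count and the substring parsing are assumptions; the real assertion lives in multinode_test.go:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Output shaped like the -- stdout -- block above (one node only).
		out := "multinode-097000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		const wantNodes = 2 // a control plane plus one worker

		if got := strings.Count(out, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(out, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}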

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-097000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-097000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.192247542s)

                                                
                                                
-- stdout --
	* [multinode-097000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-097000" primary control-plane node in "multinode-097000" cluster
	* Restarting existing qemu2 VM for "multinode-097000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-097000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 11:27:33.281822   13243 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:27:33.281981   13243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:33.281984   13243 out.go:358] Setting ErrFile to fd 2...
	I1030 11:27:33.281987   13243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:33.282130   13243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:27:33.283137   13243 out.go:352] Setting JSON to false
	I1030 11:27:33.300880   13243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7024,"bootTime":1730305829,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:27:33.300996   13243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:27:33.305693   13243 out.go:177] * [multinode-097000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:27:33.312632   13243 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:27:33.312668   13243 notify.go:220] Checking for updates...
	I1030 11:27:33.320643   13243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:27:33.323626   13243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:27:33.326624   13243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:27:33.329599   13243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:27:33.332533   13243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:27:33.335907   13243 config.go:182] Loaded profile config "multinode-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:27:33.336174   13243 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:27:33.339594   13243 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:27:33.346619   13243 start.go:297] selected driver: qemu2
	I1030 11:27:33.346627   13243 start.go:901] validating driver "qemu2" against &{Name:multinode-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:27:33.346692   13243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:27:33.349377   13243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:27:33.349400   13243 cni.go:84] Creating CNI manager for ""
	I1030 11:27:33.349420   13243 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1030 11:27:33.349469   13243 start.go:340] cluster config:
	{Name:multinode-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:27:33.353818   13243 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:33.361580   13243 out.go:177] * Starting "multinode-097000" primary control-plane node in "multinode-097000" cluster
	I1030 11:27:33.365599   13243 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:27:33.365615   13243 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:27:33.365624   13243 cache.go:56] Caching tarball of preloaded images
	I1030 11:27:33.365680   13243 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:27:33.365686   13243 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:27:33.365748   13243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/multinode-097000/config.json ...
	I1030 11:27:33.366187   13243 start.go:360] acquireMachinesLock for multinode-097000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:27:33.366223   13243 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "multinode-097000"
	I1030 11:27:33.366232   13243 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:27:33.366235   13243 fix.go:54] fixHost starting: 
	I1030 11:27:33.366357   13243 fix.go:112] recreateIfNeeded on multinode-097000: state=Stopped err=<nil>
	W1030 11:27:33.366365   13243 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:27:33.374623   13243 out.go:177] * Restarting existing qemu2 VM for "multinode-097000" ...
	I1030 11:27:33.378419   13243 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:27:33.378452   13243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:66:a6:22:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:27:33.380694   13243 main.go:141] libmachine: STDOUT: 
	I1030 11:27:33.380713   13243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:27:33.380742   13243 fix.go:56] duration metric: took 14.505041ms for fixHost
	I1030 11:27:33.380747   13243 start.go:83] releasing machines lock for "multinode-097000", held for 14.519417ms
	W1030 11:27:33.380752   13243 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:27:33.380792   13243 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:27:33.380796   13243 start.go:729] Will try again in 5 seconds ...
	I1030 11:27:38.382921   13243 start.go:360] acquireMachinesLock for multinode-097000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:27:38.383351   13243 start.go:364] duration metric: took 331.5µs to acquireMachinesLock for "multinode-097000"
	I1030 11:27:38.383471   13243 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:27:38.383487   13243 fix.go:54] fixHost starting: 
	I1030 11:27:38.384164   13243 fix.go:112] recreateIfNeeded on multinode-097000: state=Stopped err=<nil>
	W1030 11:27:38.384191   13243 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:27:38.392588   13243 out.go:177] * Restarting existing qemu2 VM for "multinode-097000" ...
	I1030 11:27:38.396640   13243 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:27:38.397001   13243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:66:a6:22:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/multinode-097000/disk.qcow2
	I1030 11:27:38.406698   13243 main.go:141] libmachine: STDOUT: 
	I1030 11:27:38.406750   13243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:27:38.406806   13243 fix.go:56] duration metric: took 23.320084ms for fixHost
	I1030 11:27:38.406823   13243 start.go:83] releasing machines lock for "multinode-097000", held for 23.450291ms
	W1030 11:27:38.407025   13243 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-097000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-097000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:27:38.414573   13243 out.go:201] 
	W1030 11:27:38.418626   13243 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:27:38.418649   13243 out.go:270] * 
	* 
	W1030 11:27:38.421103   13243 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:27:38.428619   13243 out.go:201] 
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-097000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (74.198042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
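Every failure in this section has the same proximate cause, visible in the qemu.go/libmachine lines above: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. The handshake can be exercised without minikube; the sketch below is a hypothetical repro using the same client binary and socket path as the log, with /usr/bin/true standing in for the full QEMU command line:

    # Hypothetical repro: exercise only the socket connect step that is failing.
    # "Connection refused" here confirms no socket_vmnet daemon is listening.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
    echo "exit=$?"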
TestMultiNode/serial/ValidateNameConflict (20.29s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-097000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-097000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-097000-m01 --driver=qemu2 : exit status 80 (10.039129333s)
-- stdout --
	* [multinode-097000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-097000-m01" primary control-plane node in "multinode-097000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-097000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-097000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-097000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-097000-m02 --driver=qemu2 : exit status 80 (10.006798208s)
-- stdout --
	* [multinode-097000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-097000-m02" primary control-plane node in "multinode-097000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-097000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-097000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-097000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-097000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-097000: exit status 83 (89.028625ms)
-- stdout --
	* The control-plane node multinode-097000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-097000"
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-097000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-097000 -n multinode-097000: exit status 7 (35.912917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-097000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.29s)
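The name-conflict runs fail for the same environmental reason, not anything profile-specific: nothing is listening on /var/run/socket_vmnet. A hedged checklist for the build host follows; the daemon launch line mirrors socket_vmnet's documented invocation, and the gateway address is an example value, not one taken from this run:

    ls -l /var/run/socket_vmnet      # does the socket file exist?
    pgrep -fl socket_vmnet           # is the daemon process running?
    # If not, start it (as root) before re-running the suite; example invocation:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet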
TestPreload (10.12s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.964460542s)
-- stdout --
	* [test-preload-373000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-373000" primary control-plane node in "test-preload-373000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-373000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1030 11:27:58.958963   13296 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:27:58.959124   13296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:58.959127   13296 out.go:358] Setting ErrFile to fd 2...
	I1030 11:27:58.959129   13296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:27:58.959246   13296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:27:58.960369   13296 out.go:352] Setting JSON to false
	I1030 11:27:58.977912   13296 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7049,"bootTime":1730305829,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:27:58.977983   13296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:27:58.984339   13296 out.go:177] * [test-preload-373000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:27:58.993267   13296 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:27:58.993313   13296 notify.go:220] Checking for updates...
	I1030 11:27:59.002224   13296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:27:59.005269   13296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:27:59.009231   13296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:27:59.012231   13296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:27:59.015303   13296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:27:59.018725   13296 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:27:59.018783   13296 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:27:59.023252   13296 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:27:59.030198   13296 start.go:297] selected driver: qemu2
	I1030 11:27:59.030205   13296 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:27:59.030215   13296 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:27:59.032726   13296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:27:59.037294   13296 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:27:59.040290   13296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:27:59.040311   13296 cni.go:84] Creating CNI manager for ""
	I1030 11:27:59.040335   13296 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:27:59.040340   13296 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:27:59.040378   13296 start.go:340] cluster config:
	{Name:test-preload-373000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:27:59.045487   13296 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.054307   13296 out.go:177] * Starting "test-preload-373000" primary control-plane node in "test-preload-373000" cluster
	I1030 11:27:59.058231   13296 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1030 11:27:59.058334   13296 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/test-preload-373000/config.json ...
	I1030 11:27:59.058333   13296 cache.go:107] acquiring lock: {Name:mk8a5292f0c3a9e85954488a493b1ccaf907c974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058338   13296 cache.go:107] acquiring lock: {Name:mkc5d712b68cf3069ed4c41a06c5ea273054071d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058351   13296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/test-preload-373000/config.json: {Name:mkba3128ce2d00d5e76740523237482c33e056c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:27:59.058368   13296 cache.go:107] acquiring lock: {Name:mk9036380e61c928063617a2e94487d88a3bb066 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058326   13296 cache.go:107] acquiring lock: {Name:mka69c19de02f0de155a3ee65c19cab0fdf62d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058347   13296 cache.go:107] acquiring lock: {Name:mkc6204975362192d028a67dc40c411c51f245f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058360   13296 cache.go:107] acquiring lock: {Name:mk83df28ff2280226e553c74d5753355b2619711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058454   13296 cache.go:107] acquiring lock: {Name:mkc0c5d9ed8b25bc84d6acb50db9a02e43b1d63f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058413   13296 cache.go:107] acquiring lock: {Name:mkf6a3fb9f0ef20ee8634086f25477081fb99b6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:27:59.058890   13296 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1030 11:27:59.058890   13296 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1030 11:27:59.058957   13296 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:27:59.058992   13296 start.go:360] acquireMachinesLock for test-preload-373000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:27:59.059055   13296 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1030 11:27:59.059069   13296 start.go:364] duration metric: took 70.292µs to acquireMachinesLock for "test-preload-373000"
	I1030 11:27:59.059083   13296 start.go:93] Provisioning new machine with config: &{Name:test-preload-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:27:59.059114   13296 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:27:59.059119   13296 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1030 11:27:59.059237   13296 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:27:59.059054   13296 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:27:59.059563   13296 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 11:27:59.062173   13296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:27:59.069717   13296 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1030 11:27:59.070333   13296 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1030 11:27:59.070315   13296 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:27:59.070433   13296 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1030 11:27:59.070460   13296 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1030 11:27:59.070521   13296 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:27:59.072103   13296 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1030 11:27:59.072364   13296 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:27:59.079846   13296 start.go:159] libmachine.API.Create for "test-preload-373000" (driver="qemu2")
	I1030 11:27:59.079869   13296 client.go:168] LocalClient.Create starting
	I1030 11:27:59.079952   13296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:27:59.079989   13296 main.go:141] libmachine: Decoding PEM data...
	I1030 11:27:59.079998   13296 main.go:141] libmachine: Parsing certificate...
	I1030 11:27:59.080042   13296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:27:59.080082   13296 main.go:141] libmachine: Decoding PEM data...
	I1030 11:27:59.080091   13296 main.go:141] libmachine: Parsing certificate...
	I1030 11:27:59.080464   13296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:27:59.248009   13296 main.go:141] libmachine: Creating SSH key...
	I1030 11:27:59.380794   13296 main.go:141] libmachine: Creating Disk image...
	I1030 11:27:59.380817   13296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:27:59.381102   13296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2
	I1030 11:27:59.391971   13296 main.go:141] libmachine: STDOUT: 
	I1030 11:27:59.391996   13296 main.go:141] libmachine: STDERR: 
	I1030 11:27:59.392063   13296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2 +20000M
	I1030 11:27:59.400984   13296 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:27:59.401002   13296 main.go:141] libmachine: STDERR: 
	I1030 11:27:59.401038   13296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2
	I1030 11:27:59.401042   13296 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:27:59.401055   13296 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:27:59.401082   13296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ad:f1:f7:e8:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2
	I1030 11:27:59.402912   13296 main.go:141] libmachine: STDOUT: 
	I1030 11:27:59.402927   13296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:27:59.402947   13296 client.go:171] duration metric: took 323.075ms to LocalClient.Create
	I1030 11:27:59.575435   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1030 11:27:59.584029   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1030 11:27:59.614244   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1030 11:27:59.615103   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1030 11:27:59.744003   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1030 11:27:59.807586   13296 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1030 11:27:59.807604   13296 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 749.263459ms
	I1030 11:27:59.807620   13296 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1030 11:27:59.831401   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1030 11:27:59.894508   13296 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1030 11:27:59.894579   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W1030 11:28:00.426924   13296 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1030 11:28:00.427045   13296 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 11:28:00.904743   13296 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1030 11:28:00.904790   13296 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.84648275s
	I1030 11:28:00.904819   13296 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1030 11:28:01.403192   13296 start.go:128] duration metric: took 2.344086375s to createHost
	I1030 11:28:01.403251   13296 start.go:83] releasing machines lock for "test-preload-373000", held for 2.344199958s
	W1030 11:28:01.403307   13296 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:01.417280   13296 out.go:177] * Deleting "test-preload-373000" in qemu2 ...
	W1030 11:28:01.444953   13296 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:01.444971   13296 start.go:729] Will try again in 5 seconds ...
	I1030 11:28:01.943258   13296 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1030 11:28:01.943318   13296 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.88497775s
	I1030 11:28:01.943346   13296 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1030 11:28:02.113461   13296 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1030 11:28:02.113509   13296 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.055123167s
	I1030 11:28:02.113550   13296 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1030 11:28:03.269128   13296 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1030 11:28:03.269189   13296 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.210845292s
	I1030 11:28:03.269226   13296 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1030 11:28:05.011939   13296 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1030 11:28:05.011987   13296 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.953723958s
	I1030 11:28:05.012015   13296 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1030 11:28:05.186713   13296 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1030 11:28:05.186763   13296 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.128500333s
	I1030 11:28:05.186789   13296 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1030 11:28:06.445089   13296 start.go:360] acquireMachinesLock for test-preload-373000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:28:06.445590   13296 start.go:364] duration metric: took 421.5µs to acquireMachinesLock for "test-preload-373000"
	I1030 11:28:06.445719   13296 start.go:93] Provisioning new machine with config: &{Name:test-preload-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:28:06.445988   13296 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:28:06.456404   13296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:28:06.506592   13296 start.go:159] libmachine.API.Create for "test-preload-373000" (driver="qemu2")
	I1030 11:28:06.506663   13296 client.go:168] LocalClient.Create starting
	I1030 11:28:06.506876   13296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:28:06.506965   13296 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:06.506985   13296 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:06.507060   13296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:28:06.507118   13296 main.go:141] libmachine: Decoding PEM data...
	I1030 11:28:06.507134   13296 main.go:141] libmachine: Parsing certificate...
	I1030 11:28:06.507693   13296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:28:06.677745   13296 main.go:141] libmachine: Creating SSH key...
	I1030 11:28:06.818141   13296 main.go:141] libmachine: Creating Disk image...
	I1030 11:28:06.818149   13296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:28:06.818340   13296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2
	I1030 11:28:06.828405   13296 main.go:141] libmachine: STDOUT: 
	I1030 11:28:06.828421   13296 main.go:141] libmachine: STDERR: 
	I1030 11:28:06.828475   13296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2 +20000M
	I1030 11:28:06.837375   13296 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:28:06.837398   13296 main.go:141] libmachine: STDERR: 
	I1030 11:28:06.837413   13296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2
	I1030 11:28:06.837424   13296 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:28:06.837431   13296 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:28:06.837463   13296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ab:45:87:db:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/test-preload-373000/disk.qcow2
	I1030 11:28:06.839411   13296 main.go:141] libmachine: STDOUT: 
	I1030 11:28:06.839432   13296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:28:06.839447   13296 client.go:171] duration metric: took 332.756708ms to LocalClient.Create
	I1030 11:28:08.840322   13296 start.go:128] duration metric: took 2.394331917s to createHost
	I1030 11:28:08.840373   13296 start.go:83] releasing machines lock for "test-preload-373000", held for 2.394786792s
	W1030 11:28:08.840626   13296 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:28:08.857199   13296 out.go:201] 
	W1030 11:28:08.860345   13296 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:28:08.860398   13296 out.go:270] * 
	* 
	W1030 11:28:08.862880   13296 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:28:08.874197   13296 out.go:201] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-30 11:28:08.892729 -0700 PDT m=+688.076176668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-373000 -n test-preload-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-373000 -n test-preload-373000: exit status 7 (72.787583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-373000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-373000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-373000
--- FAIL: TestPreload (10.12s)
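Note that the non-VM half of TestPreload behaved normally: every v1.24.4 image was pulled and written to a tar file (the cache.go:80 "save to tar file ... succeeded" lines above) even though both createHost attempts failed. The cache can be checked directly at the path logged above; a sketch:

    # Confirm the image cache survived the failed run (path copied from the log).
    ls /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/
    # Expected entries include etcd_3.5.3-0, pause_3.7, and the four
    # kube-* v1.24.4 tar files saved above.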
TestScheduledStopUnix (10.12s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-777000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-777000 --memory=2048 --driver=qemu2 : exit status 80 (9.964610708s)
-- stdout --
	* [scheduled-stop-777000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-777000" primary control-plane node in "scheduled-stop-777000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-777000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-777000" primary control-plane node in "scheduled-stop-777000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-30 11:28:19.010898 -0700 PDT m=+698.194464376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-777000 -n scheduled-stop-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-777000 -n scheduled-stop-777000: exit status 7 (74.522834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-777000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-777000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-777000
--- FAIL: TestScheduledStopUnix (10.12s)
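The exit codes across these sections follow one pattern: start exits 80 (GUEST_PROVISION), the post-mortem status probe exits 7 with Host=Stopped, and advisory-only paths such as node add exit 83. A small triage loop built from commands already shown in this report (a sketch; the profile list is illustrative, and some profiles are removed by the cleanup steps):

    for p in multinode-097000 test-preload-373000 scheduled-stop-777000; do
      out/minikube-darwin-arm64 status --format='{{.Host}}' -p "$p"
      echo "$p exit=$?"   # expect 7 and "Stopped", matching the post-mortems
    done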
TestSkaffold (12.35s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1881889253 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1881889253 version: (1.011973958s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-513000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-513000 --memory=2600 --driver=qemu2 : exit status 80 (9.820683s)
-- stdout --
	* [skaffold-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-513000" primary control-plane node in "skaffold-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-513000" primary control-plane node in "skaffold-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-30 11:28:31.36756 -0700 PDT m=+710.551272168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-513000 -n skaffold-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-513000 -n skaffold-513000: exit status 7 (69.883834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-513000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-513000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-513000
--- FAIL: TestSkaffold (12.35s)

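Both start attempts above fail at the same step: the qemu2 driver cannot reach a socket_vmnet daemon behind /var/run/socket_vmnet. As a minimal sketch of that reachability check (illustrative only, not part of the minikube test suite; the file name probe_socket_vmnet.go is hypothetical), a few lines of Go reproduce it by dialing the unix socket directly:

// probe_socket_vmnet.go: a hypothetical standalone probe, not minikube code.
// It dials the unix socket that the qemu2 driver expects a running
// socket_vmnet daemon to be listening on.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the errors above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here typically means the socket file exists
		// but nothing is accepting on it: the same condition reported above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

A refused connection on this path points at the CI host rather than at minikube itself: if the socket_vmnet service is not running (or not listening on that socket) when the VM is created, every qemu2 start in the run fails at the same step, which matches the failures above.
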
TestRunningBinaryUpgrade (596.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3875888825 start -p running-upgrade-135000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3875888825 start -p running-upgrade-135000 --memory=2200 --vm-driver=qemu2 : (59.062166583s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-135000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-135000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.395474792s)

-- stdout --
	* [running-upgrade-135000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-135000" primary control-plane node in "running-upgrade-135000" cluster
	* Updating the running qemu2 "running-upgrade-135000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1030 11:30:13.178215   13969 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:30:13.178376   13969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:30:13.178380   13969 out.go:358] Setting ErrFile to fd 2...
	I1030 11:30:13.178383   13969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:30:13.178521   13969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:30:13.179652   13969 out.go:352] Setting JSON to false
	I1030 11:30:13.198509   13969 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7184,"bootTime":1730305829,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:30:13.198585   13969 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:30:13.203464   13969 out.go:177] * [running-upgrade-135000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:30:13.214403   13969 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:30:13.214440   13969 notify.go:220] Checking for updates...
	I1030 11:30:13.222406   13969 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:30:13.226372   13969 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:30:13.233260   13969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:30:13.236431   13969 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:30:13.239367   13969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:30:13.242656   13969 config.go:182] Loaded profile config "running-upgrade-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:30:13.246365   13969 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 11:30:13.249397   13969 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:30:13.253390   13969 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:30:13.260378   13969 start.go:297] selected driver: qemu2
	I1030 11:30:13.260385   13969 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57199 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:30:13.260454   13969 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:30:13.263346   13969 cni.go:84] Creating CNI manager for ""
	I1030 11:30:13.263380   13969 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:30:13.263401   13969 start.go:340] cluster config:
	{Name:running-upgrade-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57199 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:30:13.263457   13969 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:30:13.272381   13969 out.go:177] * Starting "running-upgrade-135000" primary control-plane node in "running-upgrade-135000" cluster
	I1030 11:30:13.276397   13969 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1030 11:30:13.276416   13969 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1030 11:30:13.276423   13969 cache.go:56] Caching tarball of preloaded images
	I1030 11:30:13.276475   13969 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:30:13.276481   13969 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1030 11:30:13.276545   13969 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/config.json ...
	I1030 11:30:13.277038   13969 start.go:360] acquireMachinesLock for running-upgrade-135000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:30:13.277074   13969 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "running-upgrade-135000"
	I1030 11:30:13.277084   13969 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:30:13.277089   13969 fix.go:54] fixHost starting: 
	I1030 11:30:13.277769   13969 fix.go:112] recreateIfNeeded on running-upgrade-135000: state=Running err=<nil>
	W1030 11:30:13.277781   13969 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:30:13.281440   13969 out.go:177] * Updating the running qemu2 "running-upgrade-135000" VM ...
	I1030 11:30:13.285331   13969 machine.go:93] provisionDockerMachine start ...
	I1030 11:30:13.285406   13969 main.go:141] libmachine: Using SSH client type: native
	I1030 11:30:13.285562   13969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10452e5f0] 0x104530e30 <nil>  [] 0s} localhost 57167 <nil> <nil>}
	I1030 11:30:13.285568   13969 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 11:30:13.346415   13969 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-135000
	
	I1030 11:30:13.346432   13969 buildroot.go:166] provisioning hostname "running-upgrade-135000"
	I1030 11:30:13.346501   13969 main.go:141] libmachine: Using SSH client type: native
	I1030 11:30:13.346610   13969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10452e5f0] 0x104530e30 <nil>  [] 0s} localhost 57167 <nil> <nil>}
	I1030 11:30:13.346616   13969 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-135000 && echo "running-upgrade-135000" | sudo tee /etc/hostname
	I1030 11:30:13.412095   13969 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-135000
	
	I1030 11:30:13.412164   13969 main.go:141] libmachine: Using SSH client type: native
	I1030 11:30:13.412288   13969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10452e5f0] 0x104530e30 <nil>  [] 0s} localhost 57167 <nil> <nil>}
	I1030 11:30:13.412296   13969 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-135000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-135000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-135000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 11:30:13.469191   13969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 11:30:13.469201   13969 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19883-11536/.minikube CaCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19883-11536/.minikube}
	I1030 11:30:13.469213   13969 buildroot.go:174] setting up certificates
	I1030 11:30:13.469219   13969 provision.go:84] configureAuth start
	I1030 11:30:13.469225   13969 provision.go:143] copyHostCerts
	I1030 11:30:13.469291   13969 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem, removing ...
	I1030 11:30:13.469296   13969 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem
	I1030 11:30:13.469410   13969 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem (1123 bytes)
	I1030 11:30:13.469604   13969 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem, removing ...
	I1030 11:30:13.469607   13969 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem
	I1030 11:30:13.469648   13969 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem (1675 bytes)
	I1030 11:30:13.469753   13969 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem, removing ...
	I1030 11:30:13.469757   13969 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem
	I1030 11:30:13.469806   13969 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem (1082 bytes)
	I1030 11:30:13.469902   13969 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-135000 san=[127.0.0.1 localhost minikube running-upgrade-135000]
	I1030 11:30:13.569018   13969 provision.go:177] copyRemoteCerts
	I1030 11:30:13.569079   13969 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 11:30:13.569087   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	I1030 11:30:13.600812   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 11:30:13.608353   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1030 11:30:13.615469   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 11:30:13.621982   13969 provision.go:87] duration metric: took 152.758ms to configureAuth
	I1030 11:30:13.621991   13969 buildroot.go:189] setting minikube options for container-runtime
	I1030 11:30:13.622085   13969 config.go:182] Loaded profile config "running-upgrade-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:30:13.622131   13969 main.go:141] libmachine: Using SSH client type: native
	I1030 11:30:13.622216   13969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10452e5f0] 0x104530e30 <nil>  [] 0s} localhost 57167 <nil> <nil>}
	I1030 11:30:13.622220   13969 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1030 11:30:13.684313   13969 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1030 11:30:13.684321   13969 buildroot.go:70] root file system type: tmpfs
	I1030 11:30:13.684366   13969 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1030 11:30:13.684411   13969 main.go:141] libmachine: Using SSH client type: native
	I1030 11:30:13.684508   13969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10452e5f0] 0x104530e30 <nil>  [] 0s} localhost 57167 <nil> <nil>}
	I1030 11:30:13.684541   13969 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1030 11:30:13.746108   13969 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1030 11:30:13.746166   13969 main.go:141] libmachine: Using SSH client type: native
	I1030 11:30:13.746305   13969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10452e5f0] 0x104530e30 <nil>  [] 0s} localhost 57167 <nil> <nil>}
	I1030 11:30:13.746313   13969 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1030 11:30:13.809133   13969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 11:30:13.809145   13969 machine.go:96] duration metric: took 523.814083ms to provisionDockerMachine
	I1030 11:30:13.809150   13969 start.go:293] postStartSetup for "running-upgrade-135000" (driver="qemu2")
	I1030 11:30:13.809157   13969 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 11:30:13.809219   13969 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 11:30:13.809229   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	I1030 11:30:13.842064   13969 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 11:30:13.843501   13969 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 11:30:13.843509   13969 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19883-11536/.minikube/addons for local assets ...
	I1030 11:30:13.843570   13969 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19883-11536/.minikube/files for local assets ...
	I1030 11:30:13.843675   13969 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem -> 120432.pem in /etc/ssl/certs
	I1030 11:30:13.843783   13969 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 11:30:13.846801   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem --> /etc/ssl/certs/120432.pem (1708 bytes)
	I1030 11:30:13.858712   13969 start.go:296] duration metric: took 49.554416ms for postStartSetup
	I1030 11:30:13.858732   13969 fix.go:56] duration metric: took 581.650958ms for fixHost
	I1030 11:30:13.858792   13969 main.go:141] libmachine: Using SSH client type: native
	I1030 11:30:13.858907   13969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10452e5f0] 0x104530e30 <nil>  [] 0s} localhost 57167 <nil> <nil>}
	I1030 11:30:13.858915   13969 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 11:30:13.916368   13969 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313014.212749930
	
	I1030 11:30:13.916375   13969 fix.go:216] guest clock: 1730313014.212749930
	I1030 11:30:13.916379   13969 fix.go:229] Guest: 2024-10-30 11:30:14.21274993 -0700 PDT Remote: 2024-10-30 11:30:13.858734 -0700 PDT m=+0.703252918 (delta=354.01593ms)
	I1030 11:30:13.916396   13969 fix.go:200] guest clock delta is within tolerance: 354.01593ms
	I1030 11:30:13.916398   13969 start.go:83] releasing machines lock for "running-upgrade-135000", held for 639.326833ms
	I1030 11:30:13.916467   13969 ssh_runner.go:195] Run: cat /version.json
	I1030 11:30:13.916475   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	I1030 11:30:13.916482   13969 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 11:30:13.916498   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	W1030 11:30:13.917031   13969 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57302->127.0.0.1:57167: read: connection reset by peer
	I1030 11:30:13.917050   13969 retry.go:31] will retry after 203.068012ms: ssh: handshake failed: read tcp 127.0.0.1:57302->127.0.0.1:57167: read: connection reset by peer
	W1030 11:30:14.153762   13969 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1030 11:30:14.153838   13969 ssh_runner.go:195] Run: systemctl --version
	I1030 11:30:14.155868   13969 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 11:30:14.157527   13969 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 11:30:14.157568   13969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1030 11:30:14.160664   13969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1030 11:30:14.164972   13969 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 11:30:14.164978   13969 start.go:495] detecting cgroup driver to use...
	I1030 11:30:14.165115   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 11:30:14.170391   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1030 11:30:14.173434   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1030 11:30:14.176515   13969 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1030 11:30:14.176547   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1030 11:30:14.180205   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1030 11:30:14.183072   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1030 11:30:14.185940   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1030 11:30:14.189246   13969 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 11:30:14.192591   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1030 11:30:14.195550   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1030 11:30:14.198374   13969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1030 11:30:14.201419   13969 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 11:30:14.204072   13969 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 11:30:14.206731   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:30:14.299984   13969 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1030 11:30:14.307544   13969 start.go:495] detecting cgroup driver to use...
	I1030 11:30:14.307654   13969 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1030 11:30:14.313159   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 11:30:14.320027   13969 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 11:30:14.325794   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 11:30:14.334971   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1030 11:30:14.339637   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 11:30:14.344594   13969 ssh_runner.go:195] Run: which cri-dockerd
	I1030 11:30:14.345998   13969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1030 11:30:14.348931   13969 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1030 11:30:14.353964   13969 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1030 11:30:14.445870   13969 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1030 11:30:14.542956   13969 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1030 11:30:14.543012   13969 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1030 11:30:14.548737   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:30:14.634465   13969 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1030 11:30:17.237899   13969 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.603449459s)
	I1030 11:30:17.237984   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1030 11:30:17.243139   13969 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1030 11:30:17.249407   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1030 11:30:17.254743   13969 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1030 11:30:17.332178   13969 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1030 11:30:17.411431   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:30:17.493457   13969 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1030 11:30:17.499797   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1030 11:30:17.504314   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:30:17.585511   13969 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1030 11:30:17.626354   13969 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1030 11:30:17.626442   13969 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1030 11:30:17.628565   13969 start.go:563] Will wait 60s for crictl version
	I1030 11:30:17.628618   13969 ssh_runner.go:195] Run: which crictl
	I1030 11:30:17.629991   13969 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 11:30:17.642189   13969 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1030 11:30:17.642262   13969 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1030 11:30:17.654938   13969 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1030 11:30:17.676350   13969 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1030 11:30:17.676493   13969 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1030 11:30:17.677837   13969 kubeadm.go:883] updating cluster {Name:running-upgrade-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57199 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:running-upgrade-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1030 11:30:17.677877   13969 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1030 11:30:17.677923   13969 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1030 11:30:17.693391   13969 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1030 11:30:17.693400   13969 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1030 11:30:17.693454   13969 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1030 11:30:17.696464   13969 ssh_runner.go:195] Run: which lz4
	I1030 11:30:17.697859   13969 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 11:30:17.699140   13969 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 11:30:17.699150   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1030 11:30:18.709173   13969 docker.go:653] duration metric: took 1.011387333s to copy over tarball
	I1030 11:30:18.709247   13969 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 11:30:19.805042   13969 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.095784708s)
	I1030 11:30:19.805056   13969 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 11:30:19.821594   13969 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1030 11:30:19.825125   13969 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1030 11:30:19.830565   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:30:19.912927   13969 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1030 11:30:21.123809   13969 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.21088s)
	I1030 11:30:21.123909   13969 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1030 11:30:21.135063   13969 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1030 11:30:21.135071   13969 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1030 11:30:21.135075   13969 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 11:30:21.140453   13969 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:30:21.142582   13969 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:30:21.144806   13969 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:30:21.144933   13969 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1030 11:30:21.146964   13969 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:30:21.147124   13969 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:30:21.148389   13969 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1030 11:30:21.148392   13969 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:30:21.149310   13969 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:30:21.149606   13969 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:30:21.150886   13969 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:30:21.151270   13969 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:30:21.151927   13969 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:30:21.151950   13969 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:30:21.153243   13969 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:30:21.153871   13969 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:30:21.556559   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:30:21.567987   13969 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1030 11:30:21.568022   13969 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:30:21.568075   13969 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:30:21.578672   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1030 11:30:21.609721   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1030 11:30:21.620437   13969 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1030 11:30:21.620463   13969 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1030 11:30:21.620518   13969 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1030 11:30:21.632186   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1030 11:30:21.632321   13969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1030 11:30:21.634796   13969 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1030 11:30:21.634815   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1030 11:30:21.643870   13969 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1030 11:30:21.643906   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1030 11:30:21.673771   13969 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1030 11:30:21.724828   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1030 11:30:21.739633   13969 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1030 11:30:21.739662   13969 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:30:21.739738   13969 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W1030 11:30:21.744239   13969 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1030 11:30:21.744394   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:30:21.753524   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1030 11:30:21.759549   13969 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1030 11:30:21.759570   13969 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:30:21.759627   13969 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:30:21.769656   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1030 11:30:21.769804   13969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1030 11:30:21.771342   13969 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1030 11:30:21.771351   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1030 11:30:21.816879   13969 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1030 11:30:21.816895   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1030 11:30:21.837940   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:30:21.867219   13969 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1030 11:30:21.867272   13969 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1030 11:30:21.867289   13969 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:30:21.867355   13969 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:30:21.873032   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:30:21.883018   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1030 11:30:21.888237   13969 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1030 11:30:21.888255   13969 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:30:21.888311   13969 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:30:21.902194   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1030 11:30:21.943280   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:30:21.962664   13969 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1030 11:30:21.962688   13969 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:30:21.962761   13969 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:30:21.983377   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1030 11:30:22.114282   13969 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1030 11:30:22.114416   13969 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:30:22.134923   13969 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1030 11:30:22.134956   13969 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:30:22.135029   13969 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:30:22.207689   13969 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 11:30:22.207835   13969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 11:30:22.209727   13969 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1030 11:30:22.209750   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1030 11:30:22.263696   13969 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 11:30:22.263712   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1030 11:30:22.779178   13969 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 11:30:22.779215   13969 cache_images.go:92] duration metric: took 1.644153417s to LoadCachedImages
	W1030 11:30:22.779258   13969 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1030 11:30:22.779264   13969 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1030 11:30:22.779322   13969 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-135000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 11:30:22.779399   13969 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1030 11:30:22.807868   13969 cni.go:84] Creating CNI manager for ""
	I1030 11:30:22.807881   13969 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:30:22.807888   13969 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 11:30:22.807897   13969 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-135000 NodeName:running-upgrade-135000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 11:30:22.807965   13969 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-135000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 11:30:22.808215   13969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1030 11:30:22.811480   13969 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 11:30:22.811521   13969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 11:30:22.814274   13969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1030 11:30:22.819115   13969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 11:30:22.833499   13969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
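
The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---") that minikube writes to /var/tmp/minikube/kubeadm.yaml.new on the node. A sketch of walking such a stream with gopkg.in/yaml.v3, assuming a local copy named kubeadm.yaml:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Hypothetical local copy of the generated multi-document config.
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// yaml.v3 decodes one document per Decode call until io.EOF.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }
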
	I1030 11:30:22.842492   13969 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1030 11:30:22.843646   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:30:22.943736   13969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 11:30:22.948698   13969 certs.go:68] Setting up /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000 for IP: 10.0.2.15
	I1030 11:30:22.948716   13969 certs.go:194] generating shared ca certs ...
	I1030 11:30:22.948724   13969 certs.go:226] acquiring lock for ca certs: {Name:mke98b939cb7b412ec11c6499518b74392aa286f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:30:22.948978   13969 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.key
	I1030 11:30:22.949017   13969 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.key
	I1030 11:30:22.949023   13969 certs.go:256] generating profile certs ...
	I1030 11:30:22.949086   13969 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/client.key
	I1030 11:30:22.949101   13969 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.key.d50fd44e
	I1030 11:30:22.949114   13969 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.crt.d50fd44e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1030 11:30:23.026532   13969 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.crt.d50fd44e ...
	I1030 11:30:23.026537   13969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.crt.d50fd44e: {Name:mk9ecf1652b7080ed5b66862535a9bbbe63e4d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:30:23.026816   13969 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.key.d50fd44e ...
	I1030 11:30:23.026820   13969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.key.d50fd44e: {Name:mkade505eb3499ebe9f3cb895cc427d4d0764fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:30:23.026974   13969 certs.go:381] copying /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.crt.d50fd44e -> /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.crt
	I1030 11:30:23.027097   13969 certs.go:385] copying /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.key.d50fd44e -> /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.key
	I1030 11:30:23.027225   13969 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/proxy-client.key
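
The profile cert generation above issues an apiserver serving certificate whose IP SANs cover the in-cluster service VIP, loopback, and the node address ([10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]), then moves the hash-suffixed .crt/.key pair into place. A minimal sketch of issuing a cert with those SANs via crypto/x509; it is self-signed for brevity, whereas minikube signs with its minikubeCA:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs taken from the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    	}
    	// Self-signed (template doubles as parent); minikube uses its CA as parent instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }
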
	I1030 11:30:23.027366   13969 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043.pem (1338 bytes)
	W1030 11:30:23.027390   13969 certs.go:480] ignoring /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043_empty.pem, impossibly tiny 0 bytes
	I1030 11:30:23.027394   13969 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem (1675 bytes)
	I1030 11:30:23.027414   13969 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem (1082 bytes)
	I1030 11:30:23.027432   13969 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem (1123 bytes)
	I1030 11:30:23.027453   13969 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem (1675 bytes)
	I1030 11:30:23.027491   13969 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem (1708 bytes)
	I1030 11:30:23.027936   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 11:30:23.034890   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 11:30:23.041371   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 11:30:23.048886   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 11:30:23.056630   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 11:30:23.064034   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 11:30:23.070786   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 11:30:23.077391   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 11:30:23.084822   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem --> /usr/share/ca-certificates/120432.pem (1708 bytes)
	I1030 11:30:23.092203   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 11:30:23.099009   13969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043.pem --> /usr/share/ca-certificates/12043.pem (1338 bytes)
	I1030 11:30:23.105762   13969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 11:30:23.110855   13969 ssh_runner.go:195] Run: openssl version
	I1030 11:30:23.112657   13969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120432.pem && ln -fs /usr/share/ca-certificates/120432.pem /etc/ssl/certs/120432.pem"
	I1030 11:30:23.115890   13969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120432.pem
	I1030 11:30:23.117205   13969 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:17 /usr/share/ca-certificates/120432.pem
	I1030 11:30:23.117232   13969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120432.pem
	I1030 11:30:23.119259   13969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/120432.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 11:30:23.122016   13969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 11:30:23.125434   13969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:30:23.126943   13969 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:30:23.126969   13969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:30:23.128854   13969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 11:30:23.131597   13969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12043.pem && ln -fs /usr/share/ca-certificates/12043.pem /etc/ssl/certs/12043.pem"
	I1030 11:30:23.134614   13969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12043.pem
	I1030 11:30:23.136139   13969 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:17 /usr/share/ca-certificates/12043.pem
	I1030 11:30:23.136167   13969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12043.pem
	I1030 11:30:23.138111   13969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12043.pem /etc/ssl/certs/51391683.0"
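
Each CA-bundle entry above is installed the way c_rehash would do it: compute the OpenSSL subject hash of the PEM (e.g. b5213941 for minikubeCA.pem) and symlink <hash>.0 in /etc/ssl/certs to the certificate. A sketch reproducing one such link, assuming openssl is on PATH and the process has write access to /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
    	// openssl x509 -hash -noout -in <cert> prints the subject hash used for the link name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Equivalent of: ln -fs <cert> <hash>.0 (remove a stale link first; needs root).
    	os.Remove(link)
    	if err := os.Symlink(certPath, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", certPath)
    }
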
	I1030 11:30:23.141209   13969 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 11:30:23.142787   13969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 11:30:23.144686   13969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 11:30:23.146669   13969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 11:30:23.148568   13969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 11:30:23.150442   13969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 11:30:23.152139   13969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
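
The `-checkend 86400` runs above ask openssl whether each control-plane certificate expires within the next 24 hours (nonzero exit if so), which is how minikube decides whether regeneration is needed. The same check in Go, assuming a hypothetical local PEM file named apiserver.crt:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// The log checks certs under /var/lib/minikube/certs; this uses a local copy.
    	data, err := os.ReadFile("apiserver.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// openssl x509 -checkend 86400: fail if the cert expires within 24h.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least 24h")
    }
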
	I1030 11:30:23.153851   13969 kubeadm.go:392] StartCluster: {Name:running-upgrade-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57199 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:30:23.153923   13969 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1030 11:30:23.164671   13969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 11:30:23.169192   13969 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 11:30:23.169203   13969 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 11:30:23.169234   13969 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 11:30:23.172549   13969 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 11:30:23.172591   13969 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-135000" does not appear in /Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:30:23.172607   13969 kubeconfig.go:62] /Users/jenkins/minikube-integration/19883-11536/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-135000" cluster setting kubeconfig missing "running-upgrade-135000" context setting]
	I1030 11:30:23.172784   13969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/kubeconfig: {Name:mkea525c0c25887bd8d562c8182eb3da015af133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:30:23.173503   13969 kapi.go:59] client config for running-upgrade-135000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/client.key", CAFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f8a7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
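
The rest.Config dump above is the client-go configuration minikube builds for the profile: the apiserver endpoint plus the profile's client cert/key and the cluster CA. A minimal sketch of constructing an equivalent config, with hypothetical local paths standing in for the profile files:

    package main

    import (
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Field values mirror the dump above; cert paths are illustrative local copies.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "client.crt",
    			KeyFile:  "client.key",
    			CAFile:   "ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	_ = clientset // e.g. clientset.CoreV1().Nodes().List(...) once the apiserver answers
    }
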
	I1030 11:30:23.174494   13969 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 11:30:23.177277   13969 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-135000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
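
Drift detection above is simply `diff -u` between the kubeadm.yaml already on the node and the freshly rendered .new file; exit status 1 marks a difference (here the cri-dockerd socket gaining its unix:// scheme and cgroupDriver moving from systemd to cgroupfs) and triggers a reconfigure. A sketch of that exit-code convention, with hypothetical file names old.yaml/new.yaml:

    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Stand-ins for /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new.
    	cmd := exec.Command("diff", "-u", "old.yaml", "new.yaml")
    	out, err := cmd.Output()
    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("no drift")
    	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
    		// diff exits 1 when the files differ; the unified diff is on stdout.
    		fmt.Printf("drift detected:\n%s", out)
    	default:
    		log.Fatal(err)
    	}
    }
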
	I1030 11:30:23.177282   13969 kubeadm.go:1160] stopping kube-system containers ...
	I1030 11:30:23.177330   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1030 11:30:23.188344   13969 docker.go:483] Stopping containers: [a3241ffb7a74 7432793e6ec0 44235af5404a 1e8beefcc8ec 69ec2e01c390 1a40594035eb 74bfb2571712 d667d81acd6c d810f1ef3606 35e054d9693f 317635699a4e e5aecc961048 49760f7fb011 6daaaaa8fe5d 6a5c68616eeb 9b049d530452 209e05cda0bd 2e70d7c55ae3 39d16bdc72d4 388ecdf3efe0 a3b41ff2eb94]
	I1030 11:30:23.188423   13969 ssh_runner.go:195] Run: docker stop a3241ffb7a74 7432793e6ec0 44235af5404a 1e8beefcc8ec 69ec2e01c390 1a40594035eb 74bfb2571712 d667d81acd6c d810f1ef3606 35e054d9693f 317635699a4e e5aecc961048 49760f7fb011 6daaaaa8fe5d 6a5c68616eeb 9b049d530452 209e05cda0bd 2e70d7c55ae3 39d16bdc72d4 388ecdf3efe0 a3b41ff2eb94
	I1030 11:30:24.058594   13969 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 11:30:24.155155   13969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 11:30:24.158837   13969 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct 30 18:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 30 18:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 30 18:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Oct 30 18:30 /etc/kubernetes/scheduler.conf
	
	I1030 11:30:24.158881   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/admin.conf
	I1030 11:30:24.163548   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1030 11:30:24.163584   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 11:30:24.169048   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/kubelet.conf
	I1030 11:30:24.174834   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1030 11:30:24.174868   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 11:30:24.180353   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/controller-manager.conf
	I1030 11:30:24.185413   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1030 11:30:24.185457   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 11:30:24.189676   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/scheduler.conf
	I1030 11:30:24.193163   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1030 11:30:24.193206   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 11:30:24.200406   13969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 11:30:24.203616   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:30:24.238657   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:30:24.715003   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:30:24.917012   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:30:24.937592   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
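
Rather than a full `kubeadm init`, the restart path re-runs only the phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A sketch of that sequence, assuming kubeadm is on PATH and run with sufficient privileges (minikube actually invokes it over SSH with an explicit binaries path, as the log shows):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// The phases re-run in the log, in order.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
    		}
    	}
    }
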
	I1030 11:30:24.963069   13969 api_server.go:52] waiting for apiserver process to appear ...
	I1030 11:30:24.963162   13969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:30:25.465496   13969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:30:25.965196   13969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:30:25.970047   13969 api_server.go:72] duration metric: took 1.006991458s to wait for apiserver process to appear ...
	I1030 11:30:25.970061   13969 api_server.go:88] waiting for apiserver healthz status ...
	I1030 11:30:25.970094   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:30:30.972105   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:30:30.972171   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:30:35.972542   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:30:35.972647   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:30:40.973763   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:30:40.973887   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:30:45.974959   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:30:45.975055   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:30:50.976503   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:30:50.976629   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:30:55.978812   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:30:55.978904   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:00.981524   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:00.981601   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:05.983134   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:05.983235   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:10.985758   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:10.985842   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:15.988383   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:15.988457   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:20.991042   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:20.991139   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:25.993785   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
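
Every probe above times out after roughly five seconds, so the apiserver never answers on https://10.0.2.15:8443/healthz and minikube falls back to gathering logs. A minimal sketch of one such probe; certificate verification is skipped here purely for illustration, whereas minikube verifies against its CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gaps between attempts in the log
    		Transport: &http.Transport{
    			// Illustration only; a real check should pin the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // the failure mode seen throughout this log
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, string(body)) // a healthy apiserver answers 200 "ok"
    }
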
	I1030 11:31:25.994085   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:31:26.029339   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:31:26.029446   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:31:26.051325   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:31:26.051407   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:31:26.062484   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:31:26.062554   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:31:26.072868   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:31:26.072951   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:31:26.083923   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:31:26.084011   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:31:26.094270   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:31:26.094344   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:31:26.104456   13969 logs.go:282] 0 containers: []
	W1030 11:31:26.104468   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:31:26.104537   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:31:26.115001   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:31:26.115020   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:31:26.115035   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:31:26.128842   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:31:26.128852   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:31:26.140930   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:31:26.140945   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:31:26.152214   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:31:26.152228   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:31:26.188974   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:31:26.188981   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:31:26.263052   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:31:26.263062   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:31:26.275370   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:31:26.275381   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:31:26.287201   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:31:26.287212   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:31:26.299154   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:31:26.299166   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:31:26.316052   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:31:26.316065   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:31:26.327092   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:31:26.327101   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:31:26.331281   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:31:26.331286   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:31:26.347538   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:31:26.347548   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:31:26.361270   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:31:26.361282   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:31:26.386262   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:31:26.386270   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:31:26.398834   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:31:26.398846   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:31:26.410222   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:31:26.410234   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
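
Each gathering round above enumerates containers per component with a docker ps name filter, then tails the last 400 lines from every match (plus journalctl for kubelet and Docker, and kubectl describe nodes). A sketch of that enumeration for a few components, assuming a local Docker daemon:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter shape as the log: containers named k8s_<component>_... .
    	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    		if err != nil {
    			log.Fatal(err)
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d container(s): %v\n", component, len(ids), ids)
    		for _, id := range ids {
    			// docker logs --tail 400 <id>, as in the log above.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("  %s: %d bytes of logs\n", id, len(logs))
    		}
    	}
    }
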
	I1030 11:31:28.923854   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:33.926177   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:33.926806   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:31:33.965528   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:31:33.965684   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:31:33.986782   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:31:33.986918   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:31:34.001829   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:31:34.001918   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:31:34.014148   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:31:34.014230   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:31:34.025496   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:31:34.025582   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:31:34.038500   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:31:34.038581   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:31:34.052859   13969 logs.go:282] 0 containers: []
	W1030 11:31:34.052874   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:31:34.052943   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:31:34.068775   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:31:34.068798   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:31:34.068803   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:31:34.086799   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:31:34.086811   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:31:34.112226   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:31:34.112234   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:31:34.124653   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:31:34.124665   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:31:34.161847   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:31:34.161861   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:31:34.166394   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:31:34.166401   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:31:34.179729   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:31:34.179741   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:31:34.191061   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:31:34.191073   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:31:34.225218   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:31:34.225228   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:31:34.241774   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:31:34.241787   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:31:34.253299   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:31:34.253309   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:31:34.265264   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:31:34.265273   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:31:34.276323   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:31:34.276334   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:31:34.291828   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:31:34.291837   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:31:34.302735   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:31:34.302745   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:31:34.316741   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:31:34.316752   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:31:34.335059   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:31:34.335070   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:31:36.848892   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:41.851727   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:41.852355   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:31:41.892951   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:31:41.893106   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:31:41.915524   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:31:41.915651   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:31:41.931063   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:31:41.931152   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:31:41.943802   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:31:41.943898   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:31:41.954617   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:31:41.954701   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:31:41.965634   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:31:41.965708   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:31:41.976052   13969 logs.go:282] 0 containers: []
	W1030 11:31:41.976065   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:31:41.976144   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:31:41.986826   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:31:41.986843   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:31:41.986848   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:31:41.991299   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:31:41.991309   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:31:42.004745   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:31:42.004760   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:31:42.016718   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:31:42.016730   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:31:42.028607   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:31:42.028618   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:31:42.064148   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:31:42.064159   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:31:42.078267   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:31:42.078279   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:31:42.090322   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:31:42.090333   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:31:42.101847   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:31:42.101864   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:31:42.113866   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:31:42.113880   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:31:42.153253   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:31:42.153266   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:31:42.167570   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:31:42.167582   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:31:42.180010   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:31:42.180024   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:31:42.191798   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:31:42.191811   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:31:42.203414   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:31:42.203427   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:31:42.221569   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:31:42.221596   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:31:42.236525   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:31:42.236535   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:31:44.764961   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:49.767132   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:49.767815   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:31:49.807008   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:31:49.807170   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:31:49.833120   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:31:49.833240   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:31:49.847531   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:31:49.847621   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:31:49.859873   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:31:49.859953   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:31:49.870474   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:31:49.870552   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:31:49.881726   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:31:49.881800   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:31:49.892461   13969 logs.go:282] 0 containers: []
	W1030 11:31:49.892472   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:31:49.892539   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:31:49.903105   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:31:49.903124   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:31:49.903129   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:31:49.917305   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:31:49.917319   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:31:49.933895   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:31:49.933908   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:31:49.945727   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:31:49.945740   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:31:49.950470   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:31:49.950476   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:31:49.964368   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:31:49.964377   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:31:49.975894   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:31:49.975906   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:31:49.987626   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:31:49.987638   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:31:50.013819   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:31:50.013834   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:31:50.031954   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:31:50.031969   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:31:50.043541   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:31:50.043554   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:31:50.078582   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:31:50.078592   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:31:50.112272   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:31:50.112287   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:31:50.124724   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:31:50.124737   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:31:50.138287   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:31:50.138300   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:31:50.150246   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:31:50.150257   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:31:50.162027   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:31:50.162039   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:31:52.675854   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:31:57.677647   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:31:57.678247   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:31:57.717511   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:31:57.717680   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:31:57.748516   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:31:57.748615   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:31:57.765053   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:31:57.765136   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:31:57.776701   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:31:57.776783   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:31:57.791839   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:31:57.791919   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:31:57.809361   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:31:57.809442   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:31:57.823130   13969 logs.go:282] 0 containers: []
	W1030 11:31:57.823143   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:31:57.823231   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:31:57.833659   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:31:57.833677   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:31:57.833683   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:31:57.838212   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:31:57.838220   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:31:57.855204   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:31:57.855215   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:31:57.867835   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:31:57.867845   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:31:57.881580   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:31:57.881591   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:31:57.893475   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:31:57.893488   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:31:57.905824   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:31:57.905834   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:31:57.923341   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:31:57.923351   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:31:57.936636   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:31:57.936649   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:31:57.948599   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:31:57.948611   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:31:57.973433   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:31:57.973441   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:31:57.985151   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:31:57.985163   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:31:58.020907   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:31:58.020921   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:31:58.033385   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:31:58.033395   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:31:58.050840   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:31:58.050851   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:31:58.087670   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:31:58.087681   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:31:58.100297   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:31:58.100312   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:00.614178   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:32:05.616566   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:32:05.616873   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:32:05.643928   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:32:05.644061   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:32:05.662782   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:32:05.662885   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:32:05.676084   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:32:05.676162   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:32:05.687454   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:32:05.687523   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:32:05.697702   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:32:05.697773   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:32:05.712416   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:32:05.712496   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:32:05.723338   13969 logs.go:282] 0 containers: []
	W1030 11:32:05.723349   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:32:05.723424   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:32:05.733672   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:32:05.733689   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:32:05.733694   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:32:05.769301   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:32:05.769309   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:32:05.803752   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:32:05.803766   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:32:05.815389   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:32:05.815399   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:32:05.828046   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:32:05.828057   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:05.839002   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:32:05.839014   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:32:05.852714   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:32:05.852726   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:32:05.876825   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:32:05.876838   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:32:05.889923   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:32:05.889937   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:32:05.901770   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:32:05.901782   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:32:05.921393   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:32:05.921406   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:32:05.945758   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:32:05.945767   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:32:05.957263   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:32:05.957275   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:32:05.961923   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:32:05.961932   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:32:05.974171   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:32:05.974183   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:32:05.987950   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:32:05.987962   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:32:06.000982   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:32:06.000994   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:32:08.514821   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:32:13.517531   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:32:13.517792   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:32:13.543592   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:32:13.543728   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:32:13.559873   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:32:13.559973   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:32:13.572682   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:32:13.572759   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:32:13.583772   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:32:13.583853   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:32:13.594378   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:32:13.594456   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:32:13.605127   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:32:13.605203   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:32:13.616944   13969 logs.go:282] 0 containers: []
	W1030 11:32:13.616956   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:32:13.617021   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:32:13.627746   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:32:13.627765   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:32:13.627770   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:32:13.641597   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:32:13.641610   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:32:13.656012   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:32:13.656023   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:32:13.681727   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:32:13.681734   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:32:13.718785   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:32:13.718792   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:32:13.723023   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:32:13.723032   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:32:13.734471   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:32:13.734482   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:32:13.748521   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:32:13.748531   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:32:13.771831   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:32:13.771844   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:32:13.784908   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:32:13.784918   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:32:13.802479   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:32:13.802490   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:32:13.814111   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:32:13.814125   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:13.825694   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:32:13.825704   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:32:13.860414   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:32:13.860427   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:32:13.878404   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:32:13.878415   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:32:13.890380   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:32:13.890393   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:32:13.903747   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:32:13.903758   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
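
Each iteration rediscovers container IDs by filtering docker ps -a on the k8s_ name prefix for every control-plane component. Since ps -a includes exited containers, two IDs per component (e.g. [1c8435217462 44235af5404a] for kube-apiserver) suggests a restarted instance listed alongside its predecessor. A self-contained sketch of that enumeration, assuming docker is on PATH; listContainers is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the repeated
// "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" calls above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also drops the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(c)
		fmt.Println(c, ids, err)
	}
}
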
	I1030 11:32:16.416623   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:32:21.419487   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:32:21.420046   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:32:21.459821   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:32:21.459980   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:32:21.482201   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:32:21.482334   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:32:21.497982   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:32:21.498065   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:32:21.510551   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:32:21.510628   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:32:21.521365   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:32:21.521451   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:32:21.532653   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:32:21.532730   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:32:21.542956   13969 logs.go:282] 0 containers: []
	W1030 11:32:21.542966   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:32:21.543028   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:32:21.559544   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:32:21.559561   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:32:21.559565   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:32:21.596714   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:32:21.596722   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:32:21.633604   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:32:21.633619   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:21.645622   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:32:21.645635   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:32:21.669925   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:32:21.669933   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:32:21.681491   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:32:21.681505   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:32:21.697194   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:32:21.697202   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:32:21.711913   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:32:21.711924   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:32:21.725622   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:32:21.725633   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:32:21.737108   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:32:21.737123   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:32:21.749785   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:32:21.749797   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:32:21.768111   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:32:21.768120   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:32:21.779690   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:32:21.779703   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:32:21.784032   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:32:21.784041   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:32:21.798351   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:32:21.798363   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:32:21.809426   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:32:21.809439   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:32:21.822462   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:32:21.822475   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
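
Besides the per-container docker logs calls, each sweep pulls a fixed set of host-level sources: the kubelet journal, the docker and cri-docker journals, dmesg filtered to warning level and above, and kubectl describe nodes run with the node-local binary against /var/lib/minikube/kubeconfig. A sketch that replays those commands the same way the runner does (via /bin/bash -c), under the assumption it is executed on the node itself with sudo available:

package main

import (
	"fmt"
	"os/exec"
)

// The exact command strings from the gathering steps above.
var hostLogCmds = []string{
	"sudo journalctl -u kubelet -n 400",
	"sudo journalctl -u docker -u cri-docker -n 400",
	"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
}

func main() {
	for _, c := range hostLogCmds {
		// Run through bash, as ssh_runner.go:195 does, so pipes work.
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Println(c, "failed:", err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", c, out)
	}
}
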
	I1030 11:32:24.335828   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:32:29.338507   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:32:29.339031   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:32:29.379577   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:32:29.379745   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:32:29.404070   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:32:29.404200   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:32:29.425422   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:32:29.425511   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:32:29.451534   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:32:29.451608   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:32:29.464940   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:32:29.465017   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:32:29.475734   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:32:29.475813   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:32:29.486263   13969 logs.go:282] 0 containers: []
	W1030 11:32:29.486278   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:32:29.486339   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:32:29.497127   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:32:29.497141   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:32:29.497152   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:32:29.501821   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:32:29.501830   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:32:29.527216   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:32:29.527227   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:32:29.540883   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:32:29.540896   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:32:29.562039   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:32:29.562051   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:32:29.574494   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:32:29.574506   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:32:29.589700   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:32:29.589711   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:29.607120   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:32:29.607133   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:32:29.620437   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:32:29.620447   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:32:29.659373   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:32:29.659387   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:32:29.674004   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:32:29.674014   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:32:29.691054   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:32:29.691070   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:32:29.702073   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:32:29.702085   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:32:29.713641   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:32:29.713654   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:32:29.748086   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:32:29.748096   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:32:29.759467   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:32:29.759477   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:32:29.778164   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:32:29.778175   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:32:32.295124   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:32:37.297326   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:32:37.297804   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:32:37.330057   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:32:37.330207   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:32:37.348773   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:32:37.348884   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:32:37.371667   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:32:37.371769   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:32:37.384328   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:32:37.384410   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:32:37.394970   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:32:37.395049   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:32:37.405861   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:32:37.405935   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:32:37.416264   13969 logs.go:282] 0 containers: []
	W1030 11:32:37.416277   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:32:37.416342   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:32:37.427371   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:32:37.427388   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:32:37.427393   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:32:37.438724   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:32:37.438734   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:32:37.458250   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:32:37.458261   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:32:37.470704   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:32:37.470716   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:32:37.493990   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:32:37.493999   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:32:37.505173   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:32:37.505184   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:32:37.517262   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:32:37.517271   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:32:37.552958   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:32:37.552969   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:32:37.558227   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:32:37.558237   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:32:37.572163   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:32:37.572172   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:32:37.583757   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:32:37.583770   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:32:37.595491   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:32:37.595502   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:32:37.607087   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:32:37.607097   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:32:37.642659   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:32:37.642666   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:32:37.656843   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:32:37.656854   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:32:37.668631   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:32:37.668643   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:37.680190   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:32:37.680201   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:32:40.199987   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:32:45.200278   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:32:45.200418   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:32:45.211611   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:32:45.211695   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:32:45.224891   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:32:45.224966   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:32:45.235625   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:32:45.235702   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:32:45.246445   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:32:45.246529   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:32:45.256961   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:32:45.257040   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:32:45.267390   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:32:45.267463   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:32:45.277049   13969 logs.go:282] 0 containers: []
	W1030 11:32:45.277061   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:32:45.277121   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:32:45.287162   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:32:45.287180   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:32:45.287188   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:32:45.298559   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:32:45.298570   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:32:45.310249   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:32:45.310260   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:32:45.347257   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:32:45.347265   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:32:45.381518   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:32:45.381530   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:32:45.395752   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:32:45.395762   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:32:45.407924   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:32:45.407936   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:32:45.412081   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:32:45.412089   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:32:45.436383   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:32:45.436389   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:32:45.471497   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:32:45.471508   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:45.486863   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:32:45.486875   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:32:45.507296   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:32:45.507307   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:32:45.520147   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:32:45.520156   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:32:45.531058   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:32:45.531071   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:32:45.543094   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:32:45.543106   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:32:45.554707   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:32:45.554721   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:32:45.572887   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:32:45.572898   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
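
The "container status" command that recurs above is a shell fallback chain: the backticks expand to crictl's full path when which finds it (otherwise to the bare word crictl, which then fails to execute), and the outer || falls through to sudo docker ps -a. The same two-step fallback expressed in Go; containerStatus is a hypothetical name:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors:
//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
// Try the CRI-aware tool first; fall back to the docker CLI.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both runtimes failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
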
	I1030 11:32:48.086641   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:32:53.088018   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:32:53.088113   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:32:53.107401   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:32:53.107481   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:32:53.118448   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:32:53.118526   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:32:53.130111   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:32:53.130190   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:32:53.140974   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:32:53.141059   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:32:53.151456   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:32:53.151532   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:32:53.162669   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:32:53.162757   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:32:53.173513   13969 logs.go:282] 0 containers: []
	W1030 11:32:53.173526   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:32:53.173596   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:32:53.185168   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:32:53.185186   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:32:53.185192   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:32:53.203637   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:32:53.203648   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:32:53.216309   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:32:53.216325   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:32:53.232451   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:32:53.232465   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:32:53.246343   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:32:53.246354   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:32:53.258188   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:32:53.258200   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:32:53.270688   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:32:53.270700   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:32:53.292199   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:32:53.292212   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:32:53.296894   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:32:53.296901   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:32:53.311734   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:32:53.311748   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:32:53.329172   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:32:53.329187   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:32:53.341034   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:32:53.341046   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:32:53.352657   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:32:53.352672   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:32:53.376715   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:32:53.376726   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:32:53.390733   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:32:53.390745   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:32:53.430821   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:32:53.430841   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:32:53.468128   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:32:53.468143   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:32:55.982730   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:00.983896   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:00.984071   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:00.995628   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:00.995708   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:01.006482   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:01.006558   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:01.017533   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:01.017609   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:01.028589   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:01.028663   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:01.039250   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:01.039326   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:01.049777   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:01.049861   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:01.060552   13969 logs.go:282] 0 containers: []
	W1030 11:33:01.060562   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:01.060628   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:01.071437   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:01.071456   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:01.071461   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:01.083573   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:01.083584   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:01.095166   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:01.095178   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:01.108683   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:01.108693   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:01.121174   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:01.121189   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:01.132814   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:01.132826   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:01.157131   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:01.157142   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:01.171285   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:01.171298   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:01.185640   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:01.185652   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:01.197062   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:01.197077   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:01.215100   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:01.215111   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:01.228876   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:01.228886   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:01.240060   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:01.240071   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:01.254507   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:01.254520   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:01.258966   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:01.258973   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:01.294730   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:01.294740   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:01.306730   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:01.306739   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:03.845661   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:08.846431   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:08.846554   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:08.859254   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:08.859346   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:08.871308   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:08.871399   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:08.884772   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:08.884849   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:08.896968   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:08.897059   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:08.909148   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:08.909231   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:08.921290   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:08.921369   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:08.932668   13969 logs.go:282] 0 containers: []
	W1030 11:33:08.932685   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:08.932772   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:08.949200   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:08.949219   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:08.949224   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:08.976439   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:08.976455   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:08.997299   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:08.997311   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:09.023253   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:09.023267   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:09.037927   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:09.037940   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:09.050079   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:09.050090   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:09.062450   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:09.062462   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:09.074460   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:09.074472   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:09.115035   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:09.115052   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:09.130537   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:09.130550   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:09.153908   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:09.153924   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:09.192643   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:09.192654   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:09.209938   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:09.209950   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:09.223369   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:09.223382   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:09.237118   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:09.237129   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:09.241945   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:09.241952   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:09.254660   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:09.254675   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:11.769438   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:16.770612   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:16.771157   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:16.813079   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:16.813233   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:16.837869   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:16.837987   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:16.862555   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:16.862644   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:16.873382   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:16.873459   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:16.883908   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:16.883989   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:16.894355   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:16.894435   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:16.905274   13969 logs.go:282] 0 containers: []
	W1030 11:33:16.905286   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:16.905354   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:16.916138   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:16.916155   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:16.916163   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:16.929838   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:16.929851   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:16.942794   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:16.942808   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:16.956904   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:16.956914   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:16.968131   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:16.968143   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:16.980129   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:16.980138   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:17.003915   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:17.003926   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:17.040207   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:17.040220   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:17.056091   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:17.056104   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:17.080445   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:17.080457   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:17.092279   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:17.092289   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:17.103100   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:17.103114   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:17.114720   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:17.114733   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:17.150493   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:17.150503   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:17.163002   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:17.163013   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:17.168594   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:17.168604   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:17.180584   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:17.180594   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
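
From 11:31:57 through the end of this excerpt every probe ends the same way: a five-second client timeout, roughly three seconds spent re-gathering the logs above, then the next attempt, so the apiserver is never observed healthy across the whole window. A generic sketch of that wait-until-deadline shape; waitFor and its parameters are illustrative, not minikube's API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries probe at the given interval until it succeeds or the
// overall deadline passes. The probe itself is expected to enforce its
// own per-attempt timeout (5s in the log above).
func waitFor(probe func() error, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err := probe(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("deadline exceeded waiting for apiserver")
}

func main() {
	// Always-failing probe stands in for the timed-out healthz check.
	err := waitFor(func() error { return errors.New("not ready") },
		2500*time.Millisecond, 10*time.Second)
	fmt.Println(err)
}
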
	I1030 11:33:19.703877   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:24.705821   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:24.706586   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:24.746989   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:24.747140   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:24.769850   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:24.770012   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:24.785514   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:24.785598   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:24.798269   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:24.798340   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:24.808801   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:24.808880   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:24.819448   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:24.819522   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:24.829791   13969 logs.go:282] 0 containers: []
	W1030 11:33:24.829803   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:24.829862   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:24.840716   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:24.840735   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:24.840744   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:24.852691   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:24.852702   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:24.864205   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:24.864217   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:24.875668   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:24.875689   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:24.897034   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:24.897046   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:24.901410   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:24.901421   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:24.940055   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:24.940070   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:24.954564   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:24.954577   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:24.968507   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:24.968516   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:24.983102   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:24.983111   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:25.000667   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:25.000680   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:25.011819   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:25.011828   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:25.024319   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:25.024327   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:25.063641   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:25.063651   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:25.075021   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:25.075030   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:25.098157   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:25.098171   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:25.128593   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:25.128606   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
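Each gathering pass above follows the same two-step pattern: docker ps -a with a name filter locates the k8s_<component> containers, then docker logs --tail 400 dumps the most recent output from each container ID. A rough standalone equivalent for a single component, run inside the guest, would be:

	for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}); do
	    docker logs --tail 400 "$id"
	done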
	I1030 11:33:27.656453   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:32.659098   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
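This probe/gather cycle repeats for the remainder of the restart phase: minikube checks the apiserver's /healthz endpoint, each attempt is abandoned after the roughly five-second client timeout visible in the timestamps, and another round of log collection follows. The same probe can be reproduced by hand from inside the VM (10.0.2.15 is the guest-side address under QEMU user networking, so it is generally not reachable from the host); -k skips certificate verification:

	curl -k --max-time 5 https://10.0.2.15:8443/healthz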
	I1030 11:33:32.659315   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:32.671617   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:32.671706   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:32.683242   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:32.683329   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:32.693980   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:32.694050   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:32.708927   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:32.709008   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:32.719600   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:32.719676   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:32.730368   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:32.730446   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:32.741187   13969 logs.go:282] 0 containers: []
	W1030 11:33:32.741198   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:32.741270   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:32.751913   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:32.751931   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:32.751936   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:32.763802   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:32.763814   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:32.776822   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:32.776832   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:32.789311   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:32.789321   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:32.803368   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:32.803379   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:32.815158   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:32.815167   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:32.828542   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:32.828553   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:32.866826   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:32.866838   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:32.880982   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:32.880995   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:32.892921   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:32.892932   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:32.928975   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:32.928987   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:32.942201   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:32.942212   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:32.967377   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:32.967384   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:32.991065   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:32.991075   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:33.002543   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:33.002552   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:33.006866   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:33.006873   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:33.021794   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:33.021803   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:35.539645   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:40.542290   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:40.542658   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:40.570920   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:40.571067   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:40.588828   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:40.588925   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:40.602475   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:40.602559   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:40.620567   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:40.620644   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:40.631087   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:40.631175   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:40.640919   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:40.640996   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:40.650575   13969 logs.go:282] 0 containers: []
	W1030 11:33:40.650587   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:40.650652   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:40.661424   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:40.661443   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:40.661449   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:40.679525   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:40.679535   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:40.715616   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:40.715627   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:40.729318   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:40.729331   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:40.740645   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:40.740657   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:40.752377   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:40.752390   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:40.775160   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:40.775169   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:40.811249   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:40.811259   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:40.824699   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:40.824708   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:40.836382   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:40.836395   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:40.847712   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:40.847722   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:40.860090   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:40.860100   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:40.871745   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:40.871754   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:40.889161   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:40.889172   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:40.900349   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:40.900358   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:40.904819   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:40.904828   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:40.918721   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:40.918731   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:43.432299   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:48.434544   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:48.435172   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:48.475583   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:48.475754   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:48.500367   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:48.500474   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:48.515489   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:48.515579   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:48.528056   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:48.528144   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:48.538748   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:48.538822   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:48.549366   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:48.549445   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:48.559301   13969 logs.go:282] 0 containers: []
	W1030 11:33:48.559313   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:48.559384   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:48.569964   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:48.569981   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:48.569985   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:48.582185   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:48.582197   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:48.594370   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:48.594385   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:48.606114   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:48.606127   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:48.620990   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:48.621000   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:48.633450   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:48.633461   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:48.647232   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:48.647241   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:48.660714   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:48.660729   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:48.672686   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:48.672699   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:48.684536   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:48.684547   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:48.696235   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:48.696246   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:48.707927   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:48.707939   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:48.730201   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:48.730209   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:48.765380   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:48.765390   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:48.779731   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:48.779744   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:48.796911   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:48.796923   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:48.801820   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:48.801829   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:51.338529   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:56.341028   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:56.341659   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:56.390008   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:56.390160   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:56.408318   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:56.408422   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:56.421463   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:56.421545   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:56.432765   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:56.432853   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:56.443156   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:56.443232   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:56.453470   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:56.453547   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:56.464079   13969 logs.go:282] 0 containers: []
	W1030 11:33:56.464094   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:56.464161   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:56.479867   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:56.479882   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:56.479889   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:56.515366   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:56.515381   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:56.529929   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:56.529943   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:56.541590   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:56.541601   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:56.553313   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:56.553325   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:56.558041   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:56.558049   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:56.570124   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:56.570137   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:56.584593   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:56.584606   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:56.596258   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:56.596270   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:56.618818   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:56.618824   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:56.630438   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:56.630449   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:56.641612   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:56.641625   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:56.653406   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:56.653417   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:56.670672   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:56.670682   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:56.682481   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:56.682491   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:56.719003   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:56.719010   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:56.732583   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:56.732593   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:59.246113   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:04.248650   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:04.249197   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:34:04.289192   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:34:04.289352   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:34:04.311466   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:34:04.311609   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:34:04.327437   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:34:04.327524   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:34:04.339752   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:34:04.339836   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:34:04.350894   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:34:04.350970   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:34:04.365041   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:34:04.365115   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:34:04.375250   13969 logs.go:282] 0 containers: []
	W1030 11:34:04.375261   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:34:04.375328   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:34:04.385611   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:34:04.385628   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:34:04.385632   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:34:04.400842   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:34:04.400855   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:34:04.420602   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:34:04.420614   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:34:04.432416   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:34:04.432429   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:34:04.455075   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:34:04.455082   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:34:04.459226   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:34:04.459235   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:34:04.471125   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:34:04.471137   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:34:04.482825   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:34:04.482840   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:34:04.495334   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:34:04.495345   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:34:04.507866   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:34:04.507877   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:34:04.519899   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:34:04.519913   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:34:04.557111   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:34:04.557121   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:34:04.574277   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:34:04.574289   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:34:04.586573   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:34:04.586584   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:34:04.597558   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:34:04.597569   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:34:04.632036   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:34:04.632047   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:34:04.645970   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:34:04.645981   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:34:07.159441   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:12.162290   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:12.162841   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:34:12.202458   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:34:12.202609   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:34:12.228997   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:34:12.229120   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:34:12.243388   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:34:12.243462   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:34:12.255172   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:34:12.255247   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:34:12.266115   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:34:12.266182   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:34:12.277332   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:34:12.277417   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:34:12.287715   13969 logs.go:282] 0 containers: []
	W1030 11:34:12.287729   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:34:12.287797   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:34:12.298724   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:34:12.298741   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:34:12.298746   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:34:12.320837   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:34:12.320846   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:34:12.332735   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:34:12.332745   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:34:12.344610   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:34:12.344620   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:34:12.355990   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:34:12.356002   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:34:12.367692   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:34:12.367703   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:34:12.402299   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:34:12.402311   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:34:12.414584   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:34:12.414597   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:34:12.435989   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:34:12.436000   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:34:12.449677   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:34:12.449687   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:34:12.486897   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:34:12.486905   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:34:12.491188   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:34:12.491194   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:34:12.505506   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:34:12.505517   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:34:12.523562   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:34:12.523573   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:34:12.536811   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:34:12.536820   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:34:12.551472   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:34:12.551482   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:34:12.563990   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:34:12.564000   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:34:15.079295   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:20.081840   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:20.082372   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:34:20.119025   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:34:20.119178   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:34:20.139478   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:34:20.139585   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:34:20.154006   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:34:20.154093   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:34:20.166335   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:34:20.166418   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:34:20.177013   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:34:20.177081   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:34:20.187937   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:34:20.188021   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:34:20.197995   13969 logs.go:282] 0 containers: []
	W1030 11:34:20.198010   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:34:20.198080   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:34:20.208818   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:34:20.208835   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:34:20.208839   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:34:20.220294   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:34:20.220309   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:34:20.241476   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:34:20.241491   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:34:20.256928   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:34:20.256939   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:34:20.275300   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:34:20.275312   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:34:20.288849   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:34:20.288863   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:34:20.300101   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:34:20.300113   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:34:20.311907   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:34:20.311921   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:34:20.323560   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:34:20.323572   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:34:20.335049   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:34:20.335061   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:34:20.358297   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:34:20.358306   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:34:20.392703   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:34:20.392717   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:34:20.397212   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:34:20.397221   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:34:20.411924   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:34:20.411937   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:34:20.424274   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:34:20.424284   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:34:20.438127   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:34:20.438139   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:34:20.451818   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:34:20.451829   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:34:22.992810   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:27.994836   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:27.995013   13969 kubeadm.go:597] duration metric: took 4m4.828671417s to restartPrimaryControlPlane
	W1030 11:34:27.995203   13969 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 11:34:27.995271   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
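Having failed every healthz probe for just over four minutes, minikube gives up on restarting the existing control plane and wipes it: kubeadm reset runs with --force to skip the confirmation prompt and --cri-socket to target cri-dockerd explicitly. Since kubeadm reset empties the static-manifest directory, one quick way to confirm the reset took effect inside the guest is:

	sudo ls -la /etc/kubernetes/manifests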
	I1030 11:34:28.986730   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 11:34:28.991592   13969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 11:34:28.994268   13969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 11:34:28.997444   13969 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 11:34:28.997450   13969 kubeadm.go:157] found existing configuration files:
	
	I1030 11:34:28.997481   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/admin.conf
	I1030 11:34:29.000443   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 11:34:29.000473   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 11:34:29.003271   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/kubelet.conf
	I1030 11:34:29.005672   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 11:34:29.005701   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 11:34:29.008692   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/controller-manager.conf
	I1030 11:34:29.011353   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 11:34:29.011383   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 11:34:29.013878   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/scheduler.conf
	I1030 11:34:29.016869   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 11:34:29.016894   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
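The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint, and is otherwise deleted so kubeadm init can regenerate it. Condensed into a loop, the logic is roughly (using the endpoint from the log):

	endpoint=https://control-plane.minikube.internal:57199
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "$endpoint" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	done

Here all four files are already gone after the reset, so every grep exits with status 2 and the removals are no-ops.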
	I1030 11:34:29.019349   13969 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 11:34:29.037559   13969 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1030 11:34:29.037588   13969 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 11:34:29.090920   13969 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 11:34:29.090986   13969 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 11:34:29.091026   13969 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
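The init command above is launched with a long --ignore-preflight-errors list so that kubeadm tolerates both the leftovers of the previous cluster (populated /etc/kubernetes/manifests and /var/lib/minikube/etcd directories, an already-bound port 10250) and the VM's constrained resources (the Swap, NumCPU and Mem checks are skipped). To see which checks would actually fire without those overrides, the preflight phase can be run on its own against the same config file; this is a sketch, not something exercised in this run:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml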
	I1030 11:34:29.140365   13969 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 11:34:29.143584   13969 out.go:235]   - Generating certificates and keys ...
	I1030 11:34:29.143617   13969 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 11:34:29.143652   13969 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 11:34:29.143689   13969 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 11:34:29.143724   13969 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 11:34:29.143764   13969 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 11:34:29.143792   13969 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 11:34:29.143824   13969 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 11:34:29.143855   13969 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 11:34:29.143898   13969 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 11:34:29.143952   13969 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 11:34:29.143997   13969 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 11:34:29.144040   13969 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 11:34:29.295297   13969 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 11:34:29.333690   13969 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 11:34:29.406546   13969 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 11:34:29.572293   13969 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 11:34:29.601206   13969 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 11:34:29.602582   13969 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 11:34:29.602607   13969 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 11:34:29.695066   13969 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 11:34:29.697975   13969 out.go:235]   - Booting up control plane ...
	I1030 11:34:29.698030   13969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 11:34:29.698067   13969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 11:34:29.698103   13969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 11:34:29.698145   13969 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 11:34:29.698226   13969 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 11:34:34.197372   13969 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502464 seconds
	I1030 11:34:34.197456   13969 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 11:34:34.203265   13969 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 11:34:34.732761   13969 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 11:34:34.733166   13969 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-135000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 11:34:35.238852   13969 kubeadm.go:310] [bootstrap-token] Using token: qxp74v.j30mnz0jwrgrduf8
	I1030 11:34:35.246254   13969 out.go:235]   - Configuring RBAC rules ...
	I1030 11:34:35.246330   13969 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 11:34:35.246382   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 11:34:35.248659   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 11:34:35.251885   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 11:34:35.253076   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 11:34:35.254175   13969 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 11:34:35.257725   13969 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 11:34:35.447062   13969 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 11:34:35.643259   13969 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 11:34:35.643763   13969 kubeadm.go:310] 
	I1030 11:34:35.643796   13969 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 11:34:35.643801   13969 kubeadm.go:310] 
	I1030 11:34:35.643839   13969 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 11:34:35.643845   13969 kubeadm.go:310] 
	I1030 11:34:35.643868   13969 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 11:34:35.643900   13969 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 11:34:35.643927   13969 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 11:34:35.643931   13969 kubeadm.go:310] 
	I1030 11:34:35.643963   13969 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 11:34:35.643968   13969 kubeadm.go:310] 
	I1030 11:34:35.643999   13969 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 11:34:35.644005   13969 kubeadm.go:310] 
	I1030 11:34:35.644043   13969 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 11:34:35.644098   13969 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 11:34:35.644149   13969 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 11:34:35.644154   13969 kubeadm.go:310] 
	I1030 11:34:35.644201   13969 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 11:34:35.644243   13969 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 11:34:35.644248   13969 kubeadm.go:310] 
	I1030 11:34:35.644295   13969 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qxp74v.j30mnz0jwrgrduf8 \
	I1030 11:34:35.644356   13969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7be18db78d143f7f1b3db8c007a27a4a1aa468667e082743ca73b9d1ecdf0184 \
	I1030 11:34:35.644368   13969 kubeadm.go:310] 	--control-plane 
	I1030 11:34:35.644372   13969 kubeadm.go:310] 
	I1030 11:34:35.644412   13969 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 11:34:35.644421   13969 kubeadm.go:310] 
	I1030 11:34:35.644460   13969 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qxp74v.j30mnz0jwrgrduf8 \
	I1030 11:34:35.644522   13969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7be18db78d143f7f1b3db8c007a27a4a1aa468667e082743ca73b9d1ecdf0184 
	I1030 11:34:35.644583   13969 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 11:34:35.644622   13969 cni.go:84] Creating CNI manager for ""
	I1030 11:34:35.644633   13969 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:34:35.652247   13969 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 11:34:35.655366   13969 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 11:34:35.658330   13969 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
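The bridge CNI configuration is pushed from memory over scp as a 496-byte conflist; its actual contents are not captured in this log. To inspect what was written, from inside the guest:

	sudo cat /etc/cni/net.d/1-k8s.conflist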
	I1030 11:34:35.663165   13969 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 11:34:35.663216   13969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 11:34:35.663223   13969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-135000 minikube.k8s.io/updated_at=2024_10_30T11_34_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=running-upgrade-135000 minikube.k8s.io/primary=true
	I1030 11:34:35.704861   13969 kubeadm.go:1113] duration metric: took 41.687041ms to wait for elevateKubeSystemPrivileges
	I1030 11:34:35.704873   13969 ops.go:34] apiserver oom_adj: -16
	I1030 11:34:35.704941   13969 kubeadm.go:394] duration metric: took 4m12.554060125s to StartCluster
	I1030 11:34:35.704954   13969 settings.go:142] acquiring lock: {Name:mk1cee1df7de5eaabbeab12792d956523e6c9184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:34:35.705172   13969 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:34:35.705493   13969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/kubeconfig: {Name:mkea525c0c25887bd8d562c8182eb3da015af133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:34:35.705694   13969 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:34:35.705746   13969 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 11:34:35.705777   13969 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-135000"
	I1030 11:34:35.705796   13969 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-135000"
	W1030 11:34:35.705799   13969 addons.go:243] addon storage-provisioner should already be in state true
	I1030 11:34:35.705814   13969 host.go:66] Checking if "running-upgrade-135000" exists ...
	I1030 11:34:35.705842   13969 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-135000"
	I1030 11:34:35.705852   13969 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-135000"
	I1030 11:34:35.705889   13969 config.go:182] Loaded profile config "running-upgrade-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:34:35.706876   13969 kapi.go:59] client config for running-upgrade-135000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/client.key", CAFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f8a7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 11:34:35.707232   13969 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-135000"
	W1030 11:34:35.707238   13969 addons.go:243] addon default-storageclass should already be in state true
	I1030 11:34:35.707245   13969 host.go:66] Checking if "running-upgrade-135000" exists ...
	I1030 11:34:35.709404   13969 out.go:177] * Verifying Kubernetes components...
	I1030 11:34:35.709766   13969 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 11:34:35.713476   13969 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 11:34:35.713482   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	I1030 11:34:35.717277   13969 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:34:35.718530   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:34:35.722323   13969 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 11:34:35.722329   13969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 11:34:35.722335   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	I1030 11:34:35.807145   13969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 11:34:35.812127   13969 api_server.go:52] waiting for apiserver process to appear ...
	I1030 11:34:35.812179   13969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:34:35.815893   13969 api_server.go:72] duration metric: took 110.189208ms to wait for apiserver process to appear ...
	I1030 11:34:35.815899   13969 api_server.go:88] waiting for apiserver healthz status ...
	I1030 11:34:35.815906   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:35.852586   13969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 11:34:35.867787   13969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 11:34:36.211390   13969 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 11:34:36.211403   13969 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 11:34:40.817679   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
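
From here the run settles into a probe loop: each "Checking apiserver healthz" line issues an HTTPS GET to https://10.0.2.15:8443/healthz, and the ~5s gap before the matching "stopped" line reflects a client-side timeout. A minimal sketch of this style of probe follows; the timeout value, retry cap, and skipped TLS verification are assumptions for illustration (the real check trusts the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
            Transport: &http.Transport{
                // Test-only shortcut in this sketch; the real client verifies
                // the apiserver against the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for attempt := 0; attempt < 12; attempt++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. context deadline exceeded
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthz:", string(body)) // "ok" once the apiserver is healthy
                return
            }
        }
        fmt.Println("apiserver never became healthy")
    }
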
	I1030 11:34:40.817760   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:45.818422   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:45.818474   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:50.818995   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:50.819041   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:55.819801   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:55.819904   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:00.821212   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:00.821261   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:05.823129   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:05.823234   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1030 11:35:06.213823   13969 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1030 11:35:06.218455   13969 out.go:177] * Enabled addons: storage-provisioner
	I1030 11:35:06.229319   13969 addons.go:510] duration metric: took 30.523917666s for enable addons: enabled=[storage-provisioner]
	I1030 11:35:10.825540   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:10.825634   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:15.828326   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:15.828462   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:20.831096   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:20.831200   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:25.833210   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:25.833297   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:30.836079   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:30.836175   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:35.836981   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:35.837271   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:35.859751   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:35.859885   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:35.874657   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:35.874746   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:35.887091   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:35.887174   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:35.898030   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:35.898116   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:35.908861   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:35.908937   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:35.919041   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:35.919109   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:35.929616   13969 logs.go:282] 0 containers: []
	W1030 11:35:35.929628   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:35.929689   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:35.939892   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:35.939909   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:35.939914   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:35.951332   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:35.951342   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:35.962857   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:35.962870   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:36.002795   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:36.002807   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:36.017199   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:36.017210   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:36.029143   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:36.029153   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:36.051177   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:36.051194   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:36.072160   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:36.072170   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:35:36.090189   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:36.090201   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:36.113376   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:36.113386   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:36.146532   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:36.146542   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:36.150855   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:36.150867   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:36.164391   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:36.164400   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
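
The block above is one full diagnostic cycle: each control-plane component is located with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then its last 400 log lines are tailed, with journalctl (kubelet, docker), dmesg, and kubectl describe nodes mixed in. The same cycle repeats below after every failed healthz probe, only the timestamps changing. A compact sketch of the container half of the cycle (component list taken from the log; error handling simplified):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids := containerIDs(c)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c) // kindnet, in this run
                continue
            }
            for _, id := range ids {
                // Mirrors: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }
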
	I1030 11:35:38.681160   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:43.682860   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:43.682960   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:43.695384   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:43.695465   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:43.707682   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:43.707768   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:43.719269   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:43.719356   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:43.731420   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:43.731501   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:43.743974   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:43.744059   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:43.757676   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:43.757759   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:43.769040   13969 logs.go:282] 0 containers: []
	W1030 11:35:43.769052   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:43.769121   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:43.781081   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:43.781106   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:43.781111   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:43.794720   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:43.794732   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:43.807383   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:43.807394   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:35:43.823157   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:43.823170   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:43.836558   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:43.836570   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:43.874009   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:43.874025   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:43.913862   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:43.913874   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:43.930066   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:43.930080   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:43.946539   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:43.946551   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:43.971313   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:43.971329   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:43.984106   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:43.984118   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:43.989109   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:43.989119   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:35:44.009314   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:44.009326   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:46.523850   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:51.526128   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:51.526232   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:51.538574   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:51.538664   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:51.554239   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:51.554318   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:51.564935   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:51.565014   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:51.575451   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:51.575531   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:51.586198   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:51.586284   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:51.597076   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:51.597150   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:51.607892   13969 logs.go:282] 0 containers: []
	W1030 11:35:51.607906   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:51.607975   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:51.618712   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:51.618727   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:51.618733   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:51.630998   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:51.631014   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:51.635526   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:51.635535   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:51.687061   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:51.687072   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:51.710574   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:51.710584   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:51.724827   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:51.724840   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:51.737283   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:51.737295   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:35:51.754562   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:51.754575   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:51.779086   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:51.779094   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:51.790517   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:51.790532   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:51.823774   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:51.823788   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:51.836961   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:51.836971   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:51.852286   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:51.852297   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:35:54.368718   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:59.371293   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:59.371474   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:59.390034   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:59.390123   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:59.400645   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:59.400714   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:59.411235   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:59.411305   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:59.421383   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:59.421449   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:59.431609   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:59.431693   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:59.446846   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:59.446919   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:59.456956   13969 logs.go:282] 0 containers: []
	W1030 11:35:59.456974   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:59.457034   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:59.467578   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:59.467591   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:59.467596   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:59.481392   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:59.481404   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:59.494783   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:59.494794   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:35:59.509324   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:59.509333   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:59.520857   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:59.520869   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:59.532799   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:59.532811   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:59.558500   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:59.558511   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:59.595632   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:59.595642   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:59.599920   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:59.599929   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:59.634723   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:59.634735   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:59.649326   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:59.649336   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:59.661540   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:59.661552   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:59.675665   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:59.675678   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:02.195751   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:07.197127   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:07.197296   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:07.208526   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:07.208612   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:07.221400   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:07.221485   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:07.234101   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:07.234183   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:07.246697   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:07.246786   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:07.258199   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:07.258281   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:07.268532   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:07.268610   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:07.278247   13969 logs.go:282] 0 containers: []
	W1030 11:36:07.278268   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:07.278337   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:07.289074   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:07.289090   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:07.289096   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:07.306880   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:07.306892   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:07.318638   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:07.318648   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:07.337713   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:07.337723   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:07.363708   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:07.363719   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:07.375822   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:07.375832   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:07.411076   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:07.411109   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:07.454343   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:07.454357   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:07.468754   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:07.468767   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:07.481851   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:07.481865   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:07.501065   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:07.501078   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:07.517679   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:07.517691   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:07.522601   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:07.522609   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:10.039977   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:15.042132   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:15.042345   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:15.063248   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:15.063369   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:15.077493   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:15.077574   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:15.089827   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:15.089900   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:15.101177   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:15.101255   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:15.112899   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:15.112981   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:15.124659   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:15.124731   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:15.135635   13969 logs.go:282] 0 containers: []
	W1030 11:36:15.135648   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:15.135706   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:15.148758   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:15.148780   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:15.148785   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:15.153484   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:15.153491   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:15.171860   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:15.171873   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:15.194034   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:15.194046   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:15.217272   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:15.217283   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:15.250516   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:15.250526   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:15.285574   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:15.285585   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:15.300605   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:15.300616   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:15.311928   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:15.311939   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:15.323552   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:15.323562   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:15.340497   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:15.340507   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:15.352018   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:15.352028   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:15.369583   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:15.369593   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:17.884427   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:22.886832   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:22.887101   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:22.911419   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:22.911527   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:22.928220   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:22.928311   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:22.941030   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:22.941117   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:22.952296   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:22.952371   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:22.962486   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:22.962570   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:22.973320   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:22.973404   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:22.983616   13969 logs.go:282] 0 containers: []
	W1030 11:36:22.983631   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:22.983700   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:22.994426   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:22.994442   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:22.994448   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:23.006412   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:23.006424   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:23.023511   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:23.023522   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:23.048781   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:23.048793   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:23.084099   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:23.084112   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:23.089027   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:23.089036   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:23.104119   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:23.104130   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:23.118876   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:23.118887   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:23.130889   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:23.130900   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:23.143388   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:23.143402   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:23.167579   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:23.167595   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:23.179064   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:23.179077   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:23.212866   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:23.212877   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:25.726355   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:30.728200   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:30.728633   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:30.772474   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:30.772572   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:30.786287   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:30.786358   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:30.798181   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:30.798263   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:30.812396   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:30.812476   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:30.822707   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:30.822781   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:30.833126   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:30.833199   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:30.844189   13969 logs.go:282] 0 containers: []
	W1030 11:36:30.844201   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:30.844257   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:30.854551   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:30.854566   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:30.854571   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:30.878158   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:30.878166   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:30.889660   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:30.889671   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:30.905413   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:30.905423   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:30.919480   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:30.919494   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:30.931231   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:30.931244   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:30.943463   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:30.943476   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:30.963104   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:30.963114   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:30.974946   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:30.974961   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:31.009575   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:31.009583   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:31.014460   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:31.014467   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:31.050682   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:31.050696   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:31.063263   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:31.063278   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:33.579879   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:38.582039   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:38.582225   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:38.593350   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:38.593431   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:38.603957   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:38.604031   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:38.614857   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:38.614939   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:38.625636   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:38.625713   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:38.636153   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:38.636228   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:38.646603   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:38.646686   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:38.656954   13969 logs.go:282] 0 containers: []
	W1030 11:36:38.656965   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:38.657031   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:38.667552   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:38.667567   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:38.667573   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:38.681517   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:38.681529   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:38.693227   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:38.693237   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:38.707758   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:38.707772   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:38.719551   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:38.719560   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:38.736758   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:38.736777   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:38.760490   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:38.760498   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:38.793724   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:38.793732   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:38.808134   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:38.808145   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:38.820426   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:38.820437   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:38.832198   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:38.832210   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:38.843378   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:38.843392   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:38.847816   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:38.847824   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:41.385627   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:46.386013   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:46.386181   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:46.398942   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:46.399030   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:46.409383   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:46.409462   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:46.421541   13969 logs.go:282] 3 containers: [d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:36:46.421621   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:46.432374   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:46.432456   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:46.442994   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:46.443075   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:46.454051   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:46.454131   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:46.463554   13969 logs.go:282] 0 containers: []
	W1030 11:36:46.463565   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:46.463626   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:46.474059   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:46.474075   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:46.474079   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:46.488367   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:46.488376   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:46.514294   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:46.514308   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:46.526397   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:46.526407   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:46.541864   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:46.541878   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:46.563456   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:46.563470   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:46.568120   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:46.568126   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:46.611592   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:46.611608   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:46.625885   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:36:46.625895   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:36:46.637650   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:46.637664   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:46.649982   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:46.649995   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:46.684291   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:46.684305   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:46.704054   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:46.704065   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:46.722739   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:46.722752   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:49.240964   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:54.243244   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:54.243416   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:54.259033   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:54.259130   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:54.275943   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:54.276030   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:54.287350   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:36:54.287430   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:54.298359   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:54.298431   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:54.315005   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:54.315083   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:54.325699   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:54.325773   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:54.336262   13969 logs.go:282] 0 containers: []
	W1030 11:36:54.336275   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:54.336344   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:54.347335   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:54.347353   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:54.347358   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:54.359371   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:54.359383   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:54.384231   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:54.384238   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:54.418535   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:54.418545   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:54.422913   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:36:54.422920   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:36:54.434308   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:54.434321   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:54.445680   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:54.445692   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:54.460435   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:54.460445   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:54.472584   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:54.472609   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:54.511537   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:54.511547   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:54.525937   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:54.525946   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:54.541528   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:54.541543   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:54.563429   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:36:54.563439   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:36:54.576322   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:54.576335   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:54.587991   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:54.588002   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
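The gathering step then fans out one shell command per source over /bin/bash -c: docker logs --tail 400 <id> for each discovered container, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg, kubectl describe nodes, and a crictl listing with a docker ps fallback for container status. A sketch of that fan-out under the same assumptions (the commands are copied from the log; the Go wrapper is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // each entry mirrors a "Gathering logs for <name> ..." line above;
        // the container ID is one discovered in the previous step
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "kube-apiserver [c0bf75261edd]": "docker logs --tail 400 c0bf75261edd",
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            // run through bash so pipes, backticks, and || fallbacks work
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", name, err)
            }
            fmt.Print(string(out))
        }
    }

With that pattern in mind, the remainder of the log below is the same probe-discover-gather cycle repeating every few seconds until the outer start timeout expires.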
	I1030 11:36:57.103596   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:02.105956   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:02.106178   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:02.126921   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:02.127037   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:02.146632   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:02.146715   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:02.158887   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:02.158976   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:02.169822   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:02.169900   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:02.180698   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:02.180769   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:02.191012   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:02.191080   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:02.208307   13969 logs.go:282] 0 containers: []
	W1030 11:37:02.208320   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:02.208390   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:02.218791   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:02.218811   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:02.218816   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:02.232844   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:02.232856   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:02.250991   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:02.251001   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:02.275735   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:02.275749   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:02.280237   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:02.280246   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:02.319515   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:02.319528   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:02.333694   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:02.333704   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:02.345491   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:02.345502   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:02.379074   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:02.379085   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:02.391664   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:02.391679   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:02.404412   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:02.404423   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:02.415982   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:02.415993   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:02.427611   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:02.427622   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:02.439637   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:02.439649   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:02.451079   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:02.451091   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:04.970299   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:09.972618   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:09.972788   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:09.988134   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:09.988235   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:10.001318   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:10.001400   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:10.017477   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:10.017563   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:10.028130   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:10.028208   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:10.038732   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:10.038805   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:10.049041   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:10.049123   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:10.059983   13969 logs.go:282] 0 containers: []
	W1030 11:37:10.059997   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:10.060060   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:10.070387   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:10.070404   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:10.070409   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:10.104597   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:10.104606   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:10.119572   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:10.119587   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:10.144637   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:10.144645   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:10.157017   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:10.157027   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:10.190637   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:10.190645   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:10.201856   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:10.201867   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:10.220101   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:10.220111   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:10.231798   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:10.231808   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:10.236160   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:10.236168   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:10.253323   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:10.253335   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:10.268687   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:10.268698   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:10.280231   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:10.280240   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:10.295599   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:10.295609   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:10.307168   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:10.307178   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:12.819176   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:17.820107   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:17.820252   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:17.834817   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:17.834912   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:17.847194   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:17.847273   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:17.858159   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:17.858236   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:17.869078   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:17.869159   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:17.883301   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:17.883383   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:17.894197   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:17.894281   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:17.904427   13969 logs.go:282] 0 containers: []
	W1030 11:37:17.904441   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:17.904512   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:17.920296   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:17.920313   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:17.920319   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:17.938108   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:17.938119   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:17.949863   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:17.949876   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:17.961433   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:17.961445   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:17.975690   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:17.975700   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:18.011589   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:18.011603   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:18.025504   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:18.025516   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:18.037682   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:18.037693   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:18.049415   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:18.049428   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:18.061132   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:18.061143   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:18.097039   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:18.097048   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:18.101542   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:18.101548   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:18.119458   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:18.119472   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:18.131992   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:18.132005   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:18.143487   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:18.143496   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:20.670392   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:25.672697   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:25.672896   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:25.696038   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:25.696139   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:25.709215   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:25.709299   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:25.720060   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:25.720139   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:25.730684   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:25.730773   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:25.741372   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:25.741449   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:25.752475   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:25.752547   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:25.765411   13969 logs.go:282] 0 containers: []
	W1030 11:37:25.765427   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:25.765493   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:25.775924   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:25.775940   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:25.775946   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:25.788414   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:25.788427   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:25.802991   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:25.803002   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:25.822214   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:25.822229   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:25.835382   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:25.835392   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:25.860721   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:25.860735   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:25.896171   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:25.896182   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:25.909943   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:25.909956   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:25.930330   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:25.930343   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:25.941926   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:25.941935   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:25.962677   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:25.962689   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:25.974185   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:25.974194   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:25.979193   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:25.979203   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:26.018539   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:26.018552   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:26.030462   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:26.030472   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:28.545526   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:33.547926   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:33.548131   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:33.572135   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:33.572239   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:33.586942   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:33.587030   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:33.599027   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:33.599108   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:33.610367   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:33.610440   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:33.621128   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:33.621210   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:33.631703   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:33.631783   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:33.642169   13969 logs.go:282] 0 containers: []
	W1030 11:37:33.642182   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:33.642245   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:33.652385   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:33.652401   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:33.652406   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:33.670029   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:33.670039   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:33.685427   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:33.685439   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:33.690287   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:33.690294   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:33.704317   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:33.704327   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:33.724210   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:33.724221   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:33.735978   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:33.735988   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:33.747900   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:33.747909   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:33.759012   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:33.759024   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:33.770715   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:33.770726   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:33.784889   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:33.784900   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:33.796556   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:33.796566   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:33.829775   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:33.829785   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:33.880063   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:33.880076   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:33.905018   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:33.905027   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:36.419433   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:41.421897   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:41.422284   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:41.453529   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:41.453664   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:41.472512   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:41.472622   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:41.487705   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:41.487798   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:41.501010   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:41.501081   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:41.511967   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:41.512050   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:41.522663   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:41.522744   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:41.547490   13969 logs.go:282] 0 containers: []
	W1030 11:37:41.547502   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:41.547567   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:41.557849   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:41.557866   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:41.557872   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:41.571944   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:41.571955   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:41.605399   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:41.605409   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:41.610048   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:41.610055   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:41.621819   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:41.621831   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:41.633810   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:41.633821   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:41.648804   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:41.648814   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:41.683521   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:41.683532   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:41.696231   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:41.696241   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:41.714031   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:41.714041   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:41.725632   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:41.725643   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:41.737253   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:41.737264   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:41.752149   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:41.752158   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:41.776387   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:41.776397   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:41.790216   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:41.790226   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:44.303219   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:49.305510   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:49.305686   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:49.330872   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:49.331006   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:49.348354   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:49.348454   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:49.361204   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:49.361291   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:49.372342   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:49.372421   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:49.382919   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:49.382989   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:49.393145   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:49.393214   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:49.402819   13969 logs.go:282] 0 containers: []
	W1030 11:37:49.402829   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:49.402887   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:49.418921   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:49.418937   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:49.418943   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:49.452249   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:49.452258   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:49.464102   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:49.464113   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:49.475695   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:49.475707   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:49.487928   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:49.487939   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:49.494244   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:49.494252   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:49.505693   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:49.505704   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:49.517720   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:49.517731   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:49.553742   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:49.553753   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:49.565426   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:49.565435   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:49.580220   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:49.580231   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:49.597919   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:49.597928   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:49.622993   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:49.623003   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:49.637430   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:49.637443   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:49.654582   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:49.654592   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:52.168179   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:57.170547   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:57.170952   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:57.207156   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:57.207313   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:57.228836   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:57.228947   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:57.244289   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:57.244392   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:57.257055   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:57.257132   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:57.267723   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:57.267803   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:57.278528   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:57.278608   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:57.288995   13969 logs.go:282] 0 containers: []
	W1030 11:37:57.289007   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:57.289069   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:57.299612   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:57.299631   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:57.299637   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:57.334158   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:57.334166   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:57.346131   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:57.346144   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:57.358566   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:57.358579   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:57.396383   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:57.396398   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:57.408226   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:57.408240   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:57.420032   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:57.420044   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:57.438259   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:57.438270   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:57.449725   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:57.449737   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:57.454233   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:57.454242   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:57.468813   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:57.468826   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:57.493934   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:57.493943   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:57.508619   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:57.508632   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:57.523317   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:57.523327   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:57.535263   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:57.535274   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:00.051117   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:05.053303   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:05.053509   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:05.070046   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:05.070153   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:05.082721   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:05.082805   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:05.094045   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:05.094138   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:05.105086   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:05.105167   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:05.117840   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:05.117926   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:05.128616   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:05.128697   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:05.138594   13969 logs.go:282] 0 containers: []
	W1030 11:38:05.138608   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:05.138672   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:05.149380   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:05.149396   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:05.149401   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:05.163313   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:05.163324   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:05.176609   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:05.176622   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:05.188054   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:05.188066   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:05.211677   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:05.211687   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:05.223584   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:05.223598   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:05.236397   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:05.236409   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:05.252574   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:05.252590   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:05.265848   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:05.265859   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:05.302218   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:05.302230   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:05.315003   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:05.315014   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:05.331856   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:05.331871   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:05.366554   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:05.366574   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:05.371686   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:05.371695   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:05.386126   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:05.386139   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:07.905594   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:12.907938   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:12.908208   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:12.933907   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:12.934021   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:12.951525   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:12.951626   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:12.965739   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:12.965919   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:12.978352   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:12.978432   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:12.989160   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:12.989228   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:13.000467   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:13.000543   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:13.011110   13969 logs.go:282] 0 containers: []
	W1030 11:38:13.011119   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:13.011184   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:13.021674   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:13.021689   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:13.021695   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:13.045666   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:13.045688   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:13.065847   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:13.065858   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:13.089392   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:13.089406   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:13.125180   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:13.125190   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:13.137567   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:13.137576   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:13.151501   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:13.151515   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:13.163471   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:13.163481   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:13.180955   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:13.180964   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:13.185840   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:13.185845   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:13.197487   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:13.197496   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:13.234160   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:13.234175   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:13.246592   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:13.246601   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:13.257801   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:13.257811   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:13.269841   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:13.269855   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:15.787676   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:20.790007   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:20.790315   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:20.816148   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:20.816280   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:20.833408   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:20.833485   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:20.846432   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:20.846516   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:20.857100   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:20.857175   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:20.867470   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:20.867552   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:20.878781   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:20.878855   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:20.890896   13969 logs.go:282] 0 containers: []
	W1030 11:38:20.890910   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:20.890978   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:20.902484   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:20.902500   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:20.902505   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:20.937443   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:20.937453   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:20.973500   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:20.973511   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:20.985896   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:20.985908   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:20.997492   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:20.997505   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:21.012012   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:21.012023   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:21.017007   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:21.017013   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:21.034295   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:21.034306   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:21.052667   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:21.052677   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:21.067026   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:21.067036   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:21.078571   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:21.078580   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:21.095720   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:21.095731   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:21.108110   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:21.108123   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:21.120008   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:21.120020   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:21.132221   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:21.132232   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:23.659573   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:28.661732   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:28.661858   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:28.672849   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:28.672934   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:28.683953   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:28.684036   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:28.694582   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:28.694666   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:28.705202   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:28.705273   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:28.715826   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:28.715901   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:28.726292   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:28.726370   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:28.736469   13969 logs.go:282] 0 containers: []
	W1030 11:38:28.736481   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:28.736542   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:28.747385   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:28.747400   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:28.747405   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:28.761781   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:28.761794   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:28.778863   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:28.778873   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:28.791016   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:28.791025   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:28.825264   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:28.825273   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:28.836768   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:28.836780   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:28.849650   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:28.849659   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:28.861671   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:28.861681   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:28.886224   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:28.886236   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:28.898448   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:28.898458   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:28.917517   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:28.917526   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:28.934836   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:28.934847   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:28.946337   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:28.946347   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:28.950655   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:28.950661   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:28.984742   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:28.984752   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:31.498508   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:36.500800   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:36.503776   13969 out.go:201] 
	W1030 11:38:36.508750   13969 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1030 11:38:36.508756   13969 out.go:270] * 
	W1030 11:38:36.509189   13969 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:38:36.520797   13969 out.go:201] 

** /stderr **
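The stderr above loops through one pattern until the 6m0s GUEST_START deadline: each `api_server.go:253` probe of https://10.0.2.15:8443/healthz dies on the ~5s client timeout (`api_server.go:269`), minikube re-gathers component logs, and the next probe starts. A minimal Go sketch of that polling pattern follows; the name `waitForHealthy`, the 2s backoff, and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual implementation:

	// Hypothetical sketch only: mirrors the probe loop visible in the log above,
	// not minikube's real verification code.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func waitForHealthy(url string, perProbe, overall time.Duration) error {
		client := &http.Client{
			Timeout: perProbe, // the ~5s gap between each :253/:269 pair above
			Transport: &http.Transport{
				// assumption: skip verification of the guest's self-signed cert
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // assumed backoff between probes
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}
	
	func main() {
		err := waitForHealthy("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
		if err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}

In this run the loop never sees a 200, which is exactly the exit status 80 path recorded below.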
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-135000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-30 11:38:36.618584 -0700 PDT m=+1315.809410293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-135000 -n running-upgrade-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-135000 -n running-upgrade-135000: exit status 2 (15.71435675s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
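Each post-mortem pass in these logs enumerates the k8s_<component> containers with `docker ps -a --filter=name=... --format={{.ID}}` and then tails every hit with `docker logs --tail 400`. A self-contained sketch of that sweep, handy for reproducing the collection by hand, is below; the component list and output headers are assumptions for illustration, not the harness's code:

	// Hypothetical sketch of the per-component log sweep shown in these logs.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner",
		}
		for _, c := range components {
			// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Println(c, "lookup failed:", err)
				continue
			}
			ids := strings.Fields(string(out)) // one container ID per line
			fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
			for _, id := range ids {
				// docker logs --tail 400 <id>, as in the gathering lines above
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
			}
		}
	}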
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-135000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-269000          | force-systemd-flag-269000 | jenkins | v1.34.0 | 30 Oct 24 11:28 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-842000              | force-systemd-env-842000  | jenkins | v1.34.0 | 30 Oct 24 11:28 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-842000           | force-systemd-env-842000  | jenkins | v1.34.0 | 30 Oct 24 11:28 PDT | 30 Oct 24 11:28 PDT |
	| start   | -p docker-flags-234000                | docker-flags-234000       | jenkins | v1.34.0 | 30 Oct 24 11:28 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-269000             | force-systemd-flag-269000 | jenkins | v1.34.0 | 30 Oct 24 11:28 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-269000          | force-systemd-flag-269000 | jenkins | v1.34.0 | 30 Oct 24 11:28 PDT | 30 Oct 24 11:28 PDT |
	| start   | -p cert-expiration-493000             | cert-expiration-493000    | jenkins | v1.34.0 | 30 Oct 24 11:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-234000 ssh               | docker-flags-234000       | jenkins | v1.34.0 | 30 Oct 24 11:29 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-234000 ssh               | docker-flags-234000       | jenkins | v1.34.0 | 30 Oct 24 11:29 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-234000                | docker-flags-234000       | jenkins | v1.34.0 | 30 Oct 24 11:29 PDT | 30 Oct 24 11:29 PDT |
	| start   | -p cert-options-978000                | cert-options-978000       | jenkins | v1.34.0 | 30 Oct 24 11:29 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-978000 ssh               | cert-options-978000       | jenkins | v1.34.0 | 30 Oct 24 11:29 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-978000 -- sudo        | cert-options-978000       | jenkins | v1.34.0 | 30 Oct 24 11:29 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-978000                | cert-options-978000       | jenkins | v1.34.0 | 30 Oct 24 11:29 PDT | 30 Oct 24 11:29 PDT |
	| start   | -p running-upgrade-135000             | minikube                  | jenkins | v1.26.0 | 30 Oct 24 11:29 PDT | 30 Oct 24 11:30 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-135000             | running-upgrade-135000    | jenkins | v1.34.0 | 30 Oct 24 11:30 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-493000             | cert-expiration-493000    | jenkins | v1.34.0 | 30 Oct 24 11:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-493000             | cert-expiration-493000    | jenkins | v1.34.0 | 30 Oct 24 11:32 PDT | 30 Oct 24 11:32 PDT |
	| start   | -p kubernetes-upgrade-816000          | kubernetes-upgrade-816000 | jenkins | v1.34.0 | 30 Oct 24 11:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-816000          | kubernetes-upgrade-816000 | jenkins | v1.34.0 | 30 Oct 24 11:32 PDT | 30 Oct 24 11:32 PDT |
	| start   | -p kubernetes-upgrade-816000          | kubernetes-upgrade-816000 | jenkins | v1.34.0 | 30 Oct 24 11:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-816000          | kubernetes-upgrade-816000 | jenkins | v1.34.0 | 30 Oct 24 11:32 PDT | 30 Oct 24 11:32 PDT |
	| start   | -p stopped-upgrade-877000             | minikube                  | jenkins | v1.26.0 | 30 Oct 24 11:32 PDT | 30 Oct 24 11:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-877000 stop           | minikube                  | jenkins | v1.26.0 | 30 Oct 24 11:33 PDT | 30 Oct 24 11:33 PDT |
	| start   | -p stopped-upgrade-877000             | stopped-upgrade-877000    | jenkins | v1.34.0 | 30 Oct 24 11:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 11:33:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 11:33:22.675643   14108 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:33:22.675840   14108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:33:22.675845   14108 out.go:358] Setting ErrFile to fd 2...
	I1030 11:33:22.675848   14108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:33:22.676010   14108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:33:22.677343   14108 out.go:352] Setting JSON to false
	I1030 11:33:22.698098   14108 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7373,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:33:22.698186   14108 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:33:22.703421   14108 out.go:177] * [stopped-upgrade-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:33:22.711312   14108 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:33:22.711378   14108 notify.go:220] Checking for updates...
	I1030 11:33:22.718278   14108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:33:22.721347   14108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:33:22.725263   14108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:33:22.728290   14108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:33:22.731392   14108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:33:22.734605   14108 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:33:22.738237   14108 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 11:33:22.741291   14108 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:33:22.745273   14108 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:33:22.752307   14108 start.go:297] selected driver: qemu2
	I1030 11:33:22.752312   14108 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:33:22.752359   14108 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:33:22.754985   14108 cni.go:84] Creating CNI manager for ""
	I1030 11:33:22.755013   14108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:33:22.755031   14108 start.go:340] cluster config:
	{Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:33:22.755082   14108 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:33:22.763301   14108 out.go:177] * Starting "stopped-upgrade-877000" primary control-plane node in "stopped-upgrade-877000" cluster
	I1030 11:33:22.766271   14108 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1030 11:33:22.766286   14108 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1030 11:33:22.766294   14108 cache.go:56] Caching tarball of preloaded images
	I1030 11:33:22.766363   14108 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:33:22.766369   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1030 11:33:22.766422   14108 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/config.json ...
	I1030 11:33:22.766759   14108 start.go:360] acquireMachinesLock for stopped-upgrade-877000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:33:22.766804   14108 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "stopped-upgrade-877000"
	I1030 11:33:22.766811   14108 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:33:22.766816   14108 fix.go:54] fixHost starting: 
	I1030 11:33:22.766936   14108 fix.go:112] recreateIfNeeded on stopped-upgrade-877000: state=Stopped err=<nil>
	W1030 11:33:22.766943   14108 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:33:22.774273   14108 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-877000" ...
	I1030 11:33:19.703877   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:22.778297   14108 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:33:22.778388   14108 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/qemu.pid -nic user,model=virtio,hostfwd=tcp::57382-:22,hostfwd=tcp::57383-:2376,hostname=stopped-upgrade-877000 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/disk.qcow2
	I1030 11:33:22.825269   14108 main.go:141] libmachine: STDOUT: 
	I1030 11:33:22.825301   14108 main.go:141] libmachine: STDERR: 
	I1030 11:33:22.825309   14108 main.go:141] libmachine: Waiting for VM to start (ssh -p 57382 docker@127.0.0.1)...
	I1030 11:33:24.705821   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:24.706586   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:24.746989   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:24.747140   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:24.769850   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:24.770012   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:24.785514   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:24.785598   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:24.798269   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:24.798340   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:24.808801   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:24.808880   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:24.819448   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:24.819522   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:24.829791   13969 logs.go:282] 0 containers: []
	W1030 11:33:24.829803   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:24.829862   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:24.840716   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:24.840735   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:24.840744   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:24.852691   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:24.852702   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:24.864205   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:24.864217   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:24.875668   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:24.875689   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:24.897034   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:24.897046   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:24.901410   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:24.901421   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:24.940055   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:24.940070   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:24.954564   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:24.954577   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:24.968507   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:24.968516   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:24.983102   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:24.983111   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:25.000667   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:25.000680   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:25.011819   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:25.011828   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:25.024319   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:25.024327   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:25.063641   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:25.063651   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:25.075021   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:25.075030   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:25.098157   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:25.098171   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:25.128593   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:25.128606   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:27.656453   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:32.659098   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:32.659315   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:32.671617   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:32.671706   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:32.683242   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:32.683329   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:32.693980   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:32.694050   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:32.708927   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:32.709008   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:32.719600   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:32.719676   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:32.730368   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:32.730446   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:32.741187   13969 logs.go:282] 0 containers: []
	W1030 11:33:32.741198   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:32.741270   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:32.751913   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:32.751931   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:32.751936   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:32.763802   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:32.763814   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:32.776822   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:32.776832   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:32.789311   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:32.789321   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:32.803368   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:32.803379   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:32.815158   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:32.815167   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:32.828542   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:32.828553   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:32.866826   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:32.866838   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:32.880982   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:32.880995   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:32.892921   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:32.892932   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:32.928975   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:32.928987   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:32.942201   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:32.942212   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:32.967377   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:32.967384   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:32.991065   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:32.991075   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:33.002543   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:33.002552   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:33.006866   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:33.006873   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:33.021794   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:33.021803   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:35.539645   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:40.542290   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:40.542658   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:40.570920   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:40.571067   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:40.588828   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:40.588925   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:40.602475   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:40.602559   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:40.620567   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:40.620644   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:40.631087   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:40.631175   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:40.640919   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:40.640996   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:40.650575   13969 logs.go:282] 0 containers: []
	W1030 11:33:40.650587   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:40.650652   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:40.661424   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:40.661443   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:40.661449   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:40.679525   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:40.679535   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:40.715616   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:40.715627   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:40.729318   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:40.729331   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:40.740645   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:40.740657   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:40.752377   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:40.752390   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:40.775160   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:40.775169   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:40.811249   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:40.811259   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:40.824699   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:40.824708   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:40.836382   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:40.836395   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:40.847712   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:40.847722   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:40.860090   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:40.860100   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:40.871745   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:40.871754   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:40.889161   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:40.889172   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:40.900349   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:40.900358   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:40.904819   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:40.904828   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:40.918721   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:40.918731   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:42.944860   14108 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/config.json ...
	I1030 11:33:42.945776   14108 machine.go:93] provisionDockerMachine start ...
	I1030 11:33:42.946024   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:42.946598   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:42.946615   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 11:33:43.031415   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 11:33:43.031442   14108 buildroot.go:166] provisioning hostname "stopped-upgrade-877000"
	I1030 11:33:43.031547   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.031753   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.031765   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-877000 && echo "stopped-upgrade-877000" | sudo tee /etc/hostname
	I1030 11:33:43.107352   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-877000
	
	I1030 11:33:43.107460   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.107624   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.107637   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-877000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-877000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-877000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 11:33:43.177237   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 11:33:43.177250   14108 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19883-11536/.minikube CaCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19883-11536/.minikube}
	I1030 11:33:43.177266   14108 buildroot.go:174] setting up certificates
	I1030 11:33:43.177271   14108 provision.go:84] configureAuth start
	I1030 11:33:43.177279   14108 provision.go:143] copyHostCerts
	I1030 11:33:43.177344   14108 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem, removing ...
	I1030 11:33:43.177350   14108 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem
	I1030 11:33:43.177462   14108 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem (1082 bytes)
	I1030 11:33:43.177635   14108 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem, removing ...
	I1030 11:33:43.177641   14108 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem
	I1030 11:33:43.177694   14108 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem (1123 bytes)
	I1030 11:33:43.177853   14108 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem, removing ...
	I1030 11:33:43.177857   14108 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem
	I1030 11:33:43.181017   14108 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem (1675 bytes)
	I1030 11:33:43.181170   14108 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-877000 san=[127.0.0.1 localhost minikube stopped-upgrade-877000]
	I1030 11:33:43.241466   14108 provision.go:177] copyRemoteCerts
	I1030 11:33:43.241527   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 11:33:43.241536   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	I1030 11:33:43.275100   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 11:33:43.281976   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1030 11:33:43.289419   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 11:33:43.296584   14108 provision.go:87] duration metric: took 119.305209ms to configureAuth
	I1030 11:33:43.296593   14108 buildroot.go:189] setting minikube options for container-runtime
	I1030 11:33:43.296708   14108 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:33:43.296758   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.296850   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.296855   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1030 11:33:43.355322   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1030 11:33:43.355331   14108 buildroot.go:70] root file system type: tmpfs
	I1030 11:33:43.355386   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1030 11:33:43.355445   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.355559   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.355596   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1030 11:33:43.419302   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1030 11:33:43.419371   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.419491   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.419502   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1030 11:33:43.815058   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1030 11:33:43.815074   14108 machine.go:96] duration metric: took 869.297125ms to provisionDockerMachine
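The "sudo diff -u old new || { mv ...; systemctl ... }" command above is a write-if-changed guard: diff exits non-zero when the rendered unit differs from the installed one (here, because /lib/systemd/system/docker.service did not exist yet), and only then is the new file moved into place and docker reloaded, enabled, and restarted. A sketch of the same guard in Go; writeIfChanged is a hypothetical helper, demonstrated against a temp file so it runs anywhere:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	// writeIfChanged installs rendered at path only when the content differs
	// (or the file is missing), mirroring the `diff || mv` guard in the log.
	func writeIfChanged(path string, rendered []byte) (bool, error) {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return false, nil // unchanged: no daemon-reload/restart needed
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		path := filepath.Join(os.TempDir(), "docker.service.new")
		changed, err := writeIfChanged(path, unit)
		// On a real host, a true `changed` is followed by: systemctl daemon-reload,
		// systemctl enable docker, systemctl restart docker - exactly the branch
		// the shell takes above when diff exits non-zero.
		fmt.Println("changed:", changed, "err:", err)
	}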
	I1030 11:33:43.815081   14108 start.go:293] postStartSetup for "stopped-upgrade-877000" (driver="qemu2")
	I1030 11:33:43.815087   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 11:33:43.815166   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 11:33:43.815178   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	I1030 11:33:43.846791   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 11:33:43.848112   14108 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 11:33:43.848120   14108 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19883-11536/.minikube/addons for local assets ...
	I1030 11:33:43.848202   14108 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19883-11536/.minikube/files for local assets ...
	I1030 11:33:43.848301   14108 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem -> 120432.pem in /etc/ssl/certs
	I1030 11:33:43.848423   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 11:33:43.850861   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem --> /etc/ssl/certs/120432.pem (1708 bytes)
	I1030 11:33:43.857687   14108 start.go:296] duration metric: took 42.601459ms for postStartSetup
	I1030 11:33:43.857699   14108 fix.go:56] duration metric: took 21.091132333s for fixHost
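The filesync scan above mirrors anything under .minikube/files into the guest at the same relative path, which is how files/etc/ssl/certs/120432.pem became /etc/ssl/certs/120432.pem. A sketch of that mapping, using a hypothetical localAssets helper rather than minikube's filesync.go:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// localAssets pairs each file under root with its guest destination:
	// .minikube/files/<path> maps 1:1 onto /<path> in the VM.
	func localAssets(root string) ([][2]string, error) {
		var pairs [][2]string
		err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, _ := filepath.Rel(root, p)
			pairs = append(pairs, [2]string{p, "/" + filepath.ToSlash(rel)})
			return nil
		})
		return pairs, err
	}

	func main() {
		pairs, err := localAssets(filepath.Join(".minikube", "files"))
		fmt.Println(pairs, err) // e.g. [[.minikube/files/etc/ssl/certs/120432.pem /etc/ssl/certs/120432.pem]]
	}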
	I1030 11:33:43.857744   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.857847   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.857852   14108 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 11:33:43.917028   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313224.270688546
	
	I1030 11:33:43.917036   14108 fix.go:216] guest clock: 1730313224.270688546
	I1030 11:33:43.917040   14108 fix.go:229] Guest: 2024-10-30 11:33:44.270688546 -0700 PDT Remote: 2024-10-30 11:33:43.857701 -0700 PDT m=+21.216089376 (delta=412.987546ms)
	I1030 11:33:43.917051   14108 fix.go:200] guest clock delta is within tolerance: 412.987546ms
	I1030 11:33:43.917053   14108 start.go:83] releasing machines lock for "stopped-upgrade-877000", held for 21.15049425s
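The guest-clock check above parses the VM's `date +%s.%N` output, diffs it against host time, and skips resyncing because the ~413ms delta is inside tolerance. A sketch of the comparison; the helper and the one-second tolerance shown are assumptions, the log does not state the actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta turns `date +%s.%N` output into a time and diffs it with host.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Right-pad to 9 digits so ".27" means 270ms, not 27ns.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		d, _ := clockDelta("1730313224.270688546", time.Now())
		if d < 0 {
			d = -d
		}
		fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", d, d < time.Second)
	}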
	I1030 11:33:43.917129   14108 ssh_runner.go:195] Run: cat /version.json
	I1030 11:33:43.917139   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	I1030 11:33:43.917130   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 11:33:43.917180   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	W1030 11:33:43.917640   14108 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:57533->127.0.0.1:57382: write: broken pipe
	I1030 11:33:43.917661   14108 retry.go:31] will retry after 226.223364ms: ssh: handshake failed: write tcp 127.0.0.1:57533->127.0.0.1:57382: write: broken pipe
	W1030 11:33:43.946493   14108 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1030 11:33:43.946538   14108 ssh_runner.go:195] Run: systemctl --version
	I1030 11:33:43.948293   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 11:33:43.949865   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 11:33:43.949901   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1030 11:33:43.953260   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1030 11:33:43.957866   14108 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
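The find/sed pipelines above pin any bridge or podman CNI conflists to the 10.244.0.0/16 pod subnet (gateway 10.244.0.1). The same rewrite can be done structurally rather than textually; a sketch assuming conflists that carry an ipam.subnet key, as the sed patterns expect, with pinSubnet as a hypothetical helper:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// pinSubnet rewrites the ipam subnet of every plugin in a conflist,
	// the structural equivalent of the sed one-liners in the log.
	func pinSubnet(path, subnet string) error {
		raw, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var conf map[string]any
		if err := json.Unmarshal(raw, &conf); err != nil {
			return err
		}
		plugins, _ := conf["plugins"].([]any)
		for _, p := range plugins {
			plugin, _ := p.(map[string]any)
			if ipam, _ := plugin["ipam"].(map[string]any); ipam != nil {
				ipam["subnet"] = subnet
			}
		}
		out, err := json.MarshalIndent(conf, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		fmt.Println(pinSubnet("/etc/cni/net.d/87-podman-bridge.conflist", "10.244.0.0/16"))
	}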
	I1030 11:33:43.957876   14108 start.go:495] detecting cgroup driver to use...
	I1030 11:33:43.957954   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 11:33:43.964913   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1030 11:33:43.968211   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1030 11:33:43.970998   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1030 11:33:43.971027   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1030 11:33:43.973929   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1030 11:33:43.977276   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1030 11:33:43.980697   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1030 11:33:43.983633   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 11:33:43.986501   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1030 11:33:43.989395   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1030 11:33:43.992593   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1030 11:33:43.995394   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 11:33:43.997939   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 11:33:44.001087   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:33:44.073345   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
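The sed chain above edits /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup = false for the cgroupfs driver, and moving any v1 runtimes to io.containerd.runc.v2. One of those rewrites expressed as a Go regexp, equivalent to the SystemdCgroup sed (a sketch, operating on an inline sample rather than the real file):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		in := "[plugins.cri]\n  SystemdCgroup = true\n"
		fmt.Print(re.ReplaceAllString(in, "${1}SystemdCgroup = false"))
	}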
	I1030 11:33:44.079622   14108 start.go:495] detecting cgroup driver to use...
	I1030 11:33:44.079701   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1030 11:33:44.085184   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 11:33:44.090492   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 11:33:44.100400   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 11:33:44.104636   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1030 11:33:44.109241   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1030 11:33:44.152002   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1030 11:33:44.156802   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 11:33:44.162365   14108 ssh_runner.go:195] Run: which cri-dockerd
	I1030 11:33:44.163636   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1030 11:33:44.166127   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1030 11:33:44.170967   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1030 11:33:44.253112   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1030 11:33:44.326795   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1030 11:33:44.326856   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1030 11:33:44.331935   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:33:44.407980   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1030 11:33:44.515517   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1030 11:33:44.520353   14108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
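The daemon.json pushed at 11:33:44.326856 is elided in the log (only its 130-byte size is recorded). A plausible rendering that forces the cgroupfs driver via Docker's documented exec-opts option; every key other than exec-opts is a guess, not read from the log:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]any{
			// exec-opts is Docker's documented cgroup-driver knob; the other
			// keys are assumptions about what fills the 130 bytes.
			"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
			"log-opts":   map[string]string{"max-size": "100m"},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out)) // the rendered bytes are what gets scp'd to /etc/docker/daemon.json
	}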
	I1030 11:33:43.432299   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:48.434544   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:48.435172   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:48.475583   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:48.475754   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:48.500367   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:48.500474   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:48.515489   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:48.515579   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:48.528056   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:48.528144   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:48.538748   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:48.538822   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:48.549366   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:48.549445   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:48.559301   13969 logs.go:282] 0 containers: []
	W1030 11:33:48.559313   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:48.559384   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:48.569964   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:48.569981   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:48.569985   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:48.582185   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:48.582197   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:48.594370   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:48.594385   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:48.606114   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:48.606127   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:48.620990   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:48.621000   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:48.633450   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:48.633461   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:48.647232   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:48.647241   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:48.660714   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:48.660729   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:48.672686   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:48.672699   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:48.684536   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:48.684547   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:48.696235   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:48.696246   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:48.707927   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:48.707939   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:48.730201   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:48.730209   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:48.765380   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:48.765390   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:48.779731   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:48.779744   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:48.796911   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:48.796923   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:48.801820   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:48.801829   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:51.338529   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:33:56.341028   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:33:56.341659   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:33:56.390008   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:33:56.390160   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:33:56.408318   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:33:56.408422   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:33:56.421463   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:33:56.421545   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:33:56.432765   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:33:56.432853   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:33:56.443156   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:33:56.443232   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:33:56.453470   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:33:56.453547   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:33:56.464079   13969 logs.go:282] 0 containers: []
	W1030 11:33:56.464094   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:33:56.464161   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:33:56.479867   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:33:56.479882   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:33:56.479889   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:33:56.515366   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:33:56.515381   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:33:56.529929   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:33:56.529943   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:33:56.541590   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:33:56.541601   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:33:56.553313   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:33:56.553325   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:33:56.558041   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:33:56.558049   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:33:56.570124   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:33:56.570137   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:33:56.584593   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:33:56.584606   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:33:56.596258   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:33:56.596270   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:33:56.618818   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:33:56.618824   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:33:56.630438   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:33:56.630449   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:33:56.641612   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:33:56.641625   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:33:56.653406   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:33:56.653417   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:33:56.670672   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:33:56.670682   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:33:56.682481   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:33:56.682491   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:33:56.719003   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:33:56.719010   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:33:56.732583   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:33:56.732593   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:33:59.246113   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:04.248650   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:04.249197   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:34:04.289192   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:34:04.289352   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:34:04.311466   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:34:04.311609   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:34:04.327437   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:34:04.327524   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:34:04.339752   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:34:04.339836   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:34:04.350894   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:34:04.350970   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:34:04.365041   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:34:04.365115   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:34:04.375250   13969 logs.go:282] 0 containers: []
	W1030 11:34:04.375261   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:34:04.375328   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:34:04.385611   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:34:04.385628   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:34:04.385632   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:34:04.400842   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:34:04.400855   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:34:04.420602   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:34:04.420614   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:34:04.432416   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:34:04.432429   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:34:04.455075   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:34:04.455082   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:34:04.459226   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:34:04.459235   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:34:04.471125   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:34:04.471137   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:34:04.482825   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:34:04.482840   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:34:04.495334   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:34:04.495345   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:34:04.507866   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:34:04.507877   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:34:04.519899   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:34:04.519913   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:34:04.557111   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:34:04.557121   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:34:04.574277   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:34:04.574289   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:34:04.586573   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:34:04.586584   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:34:04.597558   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:34:04.597569   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:34:04.632036   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:34:04.632047   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:34:04.645970   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:34:04.645981   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:34:07.159441   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:12.162290   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:12.162841   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:34:12.202458   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:34:12.202609   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:34:12.228997   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:34:12.229120   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:34:12.243388   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:34:12.243462   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:34:12.255172   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:34:12.255247   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:34:12.266115   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:34:12.266182   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:34:12.277332   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:34:12.277417   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:34:12.287715   13969 logs.go:282] 0 containers: []
	W1030 11:34:12.287729   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:34:12.287797   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:34:12.298724   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:34:12.298741   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:34:12.298746   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:34:12.320837   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:34:12.320846   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:34:12.332735   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:34:12.332745   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:34:12.344610   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:34:12.344620   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:34:12.355990   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:34:12.356002   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:34:12.367692   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:34:12.367703   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:34:12.402299   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:34:12.402311   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:34:12.414584   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:34:12.414597   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:34:12.435989   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:34:12.436000   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:34:12.449677   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:34:12.449687   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:34:12.486897   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:34:12.486905   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:34:12.491188   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:34:12.491194   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:34:12.505506   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:34:12.505517   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:34:12.523562   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:34:12.523573   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:34:12.536811   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:34:12.536820   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:34:12.551472   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:34:12.551482   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:34:12.563990   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:34:12.564000   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:34:15.079295   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:20.081840   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:20.082372   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:34:20.119025   13969 logs.go:282] 2 containers: [1c8435217462 44235af5404a]
	I1030 11:34:20.119178   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:34:20.139478   13969 logs.go:282] 2 containers: [a5b7a9218dcb a3241ffb7a74]
	I1030 11:34:20.139585   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:34:20.154006   13969 logs.go:282] 1 containers: [c5b87e39c5cc]
	I1030 11:34:20.154093   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:34:20.166335   13969 logs.go:282] 2 containers: [3bc6af51504c 7432793e6ec0]
	I1030 11:34:20.166418   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:34:20.177013   13969 logs.go:282] 1 containers: [f99ebd62d082]
	I1030 11:34:20.177081   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:34:20.187937   13969 logs.go:282] 2 containers: [c120d13908c0 49760f7fb011]
	I1030 11:34:20.188021   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:34:20.197995   13969 logs.go:282] 0 containers: []
	W1030 11:34:20.198010   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:34:20.198080   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:34:20.208818   13969 logs.go:282] 2 containers: [fc8f6871f300 aa0fee49eeee]
	I1030 11:34:20.208835   13969 logs.go:123] Gathering logs for kube-controller-manager [49760f7fb011] ...
	I1030 11:34:20.208839   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49760f7fb011"
	I1030 11:34:20.220294   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:34:20.220309   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:34:20.241476   13969 logs.go:123] Gathering logs for coredns [c5b87e39c5cc] ...
	I1030 11:34:20.241491   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b87e39c5cc"
	I1030 11:34:20.256928   13969 logs.go:123] Gathering logs for kube-controller-manager [c120d13908c0] ...
	I1030 11:34:20.256939   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c120d13908c0"
	I1030 11:34:20.275300   13969 logs.go:123] Gathering logs for kube-scheduler [3bc6af51504c] ...
	I1030 11:34:20.275312   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bc6af51504c"
	I1030 11:34:20.288849   13969 logs.go:123] Gathering logs for kube-scheduler [7432793e6ec0] ...
	I1030 11:34:20.288863   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7432793e6ec0"
	I1030 11:34:20.300101   13969 logs.go:123] Gathering logs for kube-proxy [f99ebd62d082] ...
	I1030 11:34:20.300113   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f99ebd62d082"
	I1030 11:34:20.311907   13969 logs.go:123] Gathering logs for storage-provisioner [fc8f6871f300] ...
	I1030 11:34:20.311921   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc8f6871f300"
	I1030 11:34:20.323560   13969 logs.go:123] Gathering logs for storage-provisioner [aa0fee49eeee] ...
	I1030 11:34:20.323572   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa0fee49eeee"
	I1030 11:34:20.335049   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:34:20.335061   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:34:20.358297   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:34:20.358306   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:34:20.392703   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:34:20.392717   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:34:20.397212   13969 logs.go:123] Gathering logs for kube-apiserver [1c8435217462] ...
	I1030 11:34:20.397221   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c8435217462"
	I1030 11:34:20.411924   13969 logs.go:123] Gathering logs for kube-apiserver [44235af5404a] ...
	I1030 11:34:20.411937   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44235af5404a"
	I1030 11:34:20.424274   13969 logs.go:123] Gathering logs for etcd [a5b7a9218dcb] ...
	I1030 11:34:20.424284   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b7a9218dcb"
	I1030 11:34:20.438127   13969 logs.go:123] Gathering logs for etcd [a3241ffb7a74] ...
	I1030 11:34:20.438139   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3241ffb7a74"
	I1030 11:34:20.451818   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:34:20.451829   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:34:22.992810   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:27.994836   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:27.995013   13969 kubeadm.go:597] duration metric: took 4m4.828671417s to restartPrimaryControlPlane
	W1030 11:34:27.995203   13969 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
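Each healthz probe above is an HTTPS GET with a five-second per-request budget (the source of the repeated "Client.Timeout exceeded" lines), retried with component logs gathered in between, until the roughly four-minute restart deadline lapses and the control plane is reset instead. A sketch of that polling shape; hypothetical, not minikube's exact code:

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe budget; source of "Client.Timeout exceeded"
			Transport: &http.Transport{
				// Probe-only shortcut; real callers verify against the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), overall)
		defer cancel()
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
			case <-time.After(3 * time.Second): // pause between probes
			}
		}
	}

	func main() {
		fmt.Println(waitHealthy("https://10.0.2.15:8443/healthz", 4*time.Minute))
	}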
	I1030 11:34:27.995271   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1030 11:34:28.986730   13969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 11:34:28.991592   13969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 11:34:28.994268   13969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 11:34:28.997444   13969 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 11:34:28.997450   13969 kubeadm.go:157] found existing configuration files:
	
	I1030 11:34:28.997481   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/admin.conf
	I1030 11:34:29.000443   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 11:34:29.000473   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 11:34:29.003271   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/kubelet.conf
	I1030 11:34:29.005672   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 11:34:29.005701   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 11:34:29.008692   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/controller-manager.conf
	I1030 11:34:29.011353   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 11:34:29.011383   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 11:34:29.013878   13969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/scheduler.conf
	I1030 11:34:29.016869   13969 kubeadm.go:163] "https://control-plane.minikube.internal:57199" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57199 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 11:34:29.016894   13969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
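The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint, and is otherwise removed so that kubeadm init regenerates it. A Go sketch of the same sweep, with sweepStale as a hypothetical helper:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	func sweepStale(endpoint string) {
		for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(path)
			if err == nil && bytes.Contains(data, []byte(endpoint)) {
				continue // config already targets the right endpoint; keep it
			}
			// Missing file or wrong endpoint: remove (rm -f semantics).
			if os.Remove(path) == nil {
				fmt.Printf("removed stale %s\n", path)
			}
		}
	}

	func main() { sweepStale("https://control-plane.minikube.internal:57199") }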
	I1030 11:34:29.019349   13969 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 11:34:29.037559   13969 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1030 11:34:29.037588   13969 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 11:34:29.090920   13969 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 11:34:29.090986   13969 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 11:34:29.091026   13969 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 11:34:29.140365   13969 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 11:34:29.143584   13969 out.go:235]   - Generating certificates and keys ...
	I1030 11:34:29.143617   13969 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 11:34:29.143652   13969 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 11:34:29.143689   13969 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 11:34:29.143724   13969 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 11:34:29.143764   13969 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 11:34:29.143792   13969 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 11:34:29.143824   13969 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 11:34:29.143855   13969 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 11:34:29.143898   13969 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 11:34:29.143952   13969 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 11:34:29.143997   13969 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 11:34:29.144040   13969 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 11:34:29.295297   13969 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 11:34:29.333690   13969 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 11:34:29.406546   13969 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 11:34:29.572293   13969 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 11:34:29.601206   13969 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 11:34:29.602582   13969 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 11:34:29.602607   13969 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 11:34:29.695066   13969 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 11:34:29.697975   13969 out.go:235]   - Booting up control plane ...
	I1030 11:34:29.698030   13969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 11:34:29.698067   13969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 11:34:29.698103   13969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 11:34:29.698145   13969 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 11:34:29.698226   13969 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 11:34:34.197372   13969 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502464 seconds
	I1030 11:34:34.197456   13969 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 11:34:34.203265   13969 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 11:34:34.732761   13969 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 11:34:34.733166   13969 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-135000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 11:34:35.238852   13969 kubeadm.go:310] [bootstrap-token] Using token: qxp74v.j30mnz0jwrgrduf8
	I1030 11:34:35.246254   13969 out.go:235]   - Configuring RBAC rules ...
	I1030 11:34:35.246330   13969 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 11:34:35.246382   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 11:34:35.248659   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 11:34:35.251885   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 11:34:35.253076   13969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 11:34:35.254175   13969 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 11:34:35.257725   13969 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 11:34:35.447062   13969 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 11:34:35.643259   13969 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 11:34:35.643763   13969 kubeadm.go:310] 
	I1030 11:34:35.643796   13969 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 11:34:35.643801   13969 kubeadm.go:310] 
	I1030 11:34:35.643839   13969 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 11:34:35.643845   13969 kubeadm.go:310] 
	I1030 11:34:35.643868   13969 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 11:34:35.643900   13969 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 11:34:35.643927   13969 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 11:34:35.643931   13969 kubeadm.go:310] 
	I1030 11:34:35.643963   13969 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 11:34:35.643968   13969 kubeadm.go:310] 
	I1030 11:34:35.643999   13969 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 11:34:35.644005   13969 kubeadm.go:310] 
	I1030 11:34:35.644043   13969 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 11:34:35.644098   13969 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 11:34:35.644149   13969 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 11:34:35.644154   13969 kubeadm.go:310] 
	I1030 11:34:35.644201   13969 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 11:34:35.644243   13969 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 11:34:35.644248   13969 kubeadm.go:310] 
	I1030 11:34:35.644295   13969 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qxp74v.j30mnz0jwrgrduf8 \
	I1030 11:34:35.644356   13969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7be18db78d143f7f1b3db8c007a27a4a1aa468667e082743ca73b9d1ecdf0184 \
	I1030 11:34:35.644368   13969 kubeadm.go:310] 	--control-plane 
	I1030 11:34:35.644372   13969 kubeadm.go:310] 
	I1030 11:34:35.644412   13969 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 11:34:35.644421   13969 kubeadm.go:310] 
	I1030 11:34:35.644460   13969 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qxp74v.j30mnz0jwrgrduf8 \
	I1030 11:34:35.644522   13969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7be18db78d143f7f1b3db8c007a27a4a1aa468667e082743ca73b9d1ecdf0184 
	I1030 11:34:35.644583   13969 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 11:34:35.644622   13969 cni.go:84] Creating CNI manager for ""
	I1030 11:34:35.644633   13969 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:34:35.652247   13969 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 11:34:35.655366   13969 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 11:34:35.658330   13969 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1030 11:34:35.663165   13969 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 11:34:35.663216   13969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 11:34:35.663223   13969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-135000 minikube.k8s.io/updated_at=2024_10_30T11_34_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=running-upgrade-135000 minikube.k8s.io/primary=true
	I1030 11:34:35.704861   13969 kubeadm.go:1113] duration metric: took 41.687041ms to wait for elevateKubeSystemPrivileges
	I1030 11:34:35.704873   13969 ops.go:34] apiserver oom_adj: -16
	I1030 11:34:35.704941   13969 kubeadm.go:394] duration metric: took 4m12.554060125s to StartCluster
	I1030 11:34:35.704954   13969 settings.go:142] acquiring lock: {Name:mk1cee1df7de5eaabbeab12792d956523e6c9184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:34:35.705172   13969 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:34:35.705493   13969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/kubeconfig: {Name:mkea525c0c25887bd8d562c8182eb3da015af133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:34:35.705694   13969 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:34:35.705746   13969 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 11:34:35.705777   13969 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-135000"
	I1030 11:34:35.705796   13969 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-135000"
	W1030 11:34:35.705799   13969 addons.go:243] addon storage-provisioner should already be in state true
	I1030 11:34:35.705814   13969 host.go:66] Checking if "running-upgrade-135000" exists ...
	I1030 11:34:35.705842   13969 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-135000"
	I1030 11:34:35.705852   13969 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-135000"
	I1030 11:34:35.705889   13969 config.go:182] Loaded profile config "running-upgrade-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:34:35.706876   13969 kapi.go:59] client config for running-upgrade-135000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/running-upgrade-135000/client.key", CAFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f8a7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 11:34:35.707232   13969 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-135000"
	W1030 11:34:35.707238   13969 addons.go:243] addon default-storageclass should already be in state true
	I1030 11:34:35.707245   13969 host.go:66] Checking if "running-upgrade-135000" exists ...
	I1030 11:34:35.709404   13969 out.go:177] * Verifying Kubernetes components...
	I1030 11:34:35.709766   13969 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 11:34:35.713476   13969 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 11:34:35.713482   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	I1030 11:34:35.717277   13969 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:34:35.718530   13969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:34:35.722323   13969 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 11:34:35.722329   13969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 11:34:35.722335   13969 sshutil.go:53] new ssh client: &{IP:localhost Port:57167 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/running-upgrade-135000/id_rsa Username:docker}
	I1030 11:34:35.807145   13969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 11:34:35.812127   13969 api_server.go:52] waiting for apiserver process to appear ...
	I1030 11:34:35.812179   13969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:34:35.815893   13969 api_server.go:72] duration metric: took 110.189208ms to wait for apiserver process to appear ...
	I1030 11:34:35.815899   13969 api_server.go:88] waiting for apiserver healthz status ...
	I1030 11:34:35.815906   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:35.852586   13969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 11:34:35.867787   13969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 11:34:36.211390   13969 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 11:34:36.211403   13969 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 11:34:40.817679   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:40.817760   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:45.818422   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:45.818474   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:50.818995   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:50.819041   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:34:55.819801   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:34:55.819904   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:00.821212   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:00.821261   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:05.823129   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:05.823234   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1030 11:35:06.213823   13969 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1030 11:35:06.218455   13969 out.go:177] * Enabled addons: storage-provisioner
	I1030 11:35:06.229319   13969 addons.go:510] duration metric: took 30.523917666s for enable addons: enabled=[storage-provisioner]
	I1030 11:35:10.825540   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:10.825634   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:15.828326   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:15.828462   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:20.831096   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:20.831200   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:25.833210   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:25.833297   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:30.836079   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:30.836175   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:35.836981   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
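
	Each probe above is given roughly 5s before the client times out. The same endpoint can be exercised by hand to distinguish a wedged apiserver from a broken route into the guest (a sketch; assumes a shell on the node):

	# -k because the serving cert chains to minikubeCA, not a system CA
	curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo
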
	I1030 11:35:35.837271   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:35.859751   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:35.859885   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:35.874657   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:35.874746   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:35.887091   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:35.887174   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:35.898030   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:35.898116   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:35.908861   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:35.908937   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:35.919041   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:35.919109   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:35.929616   13969 logs.go:282] 0 containers: []
	W1030 11:35:35.929628   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:35.929689   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:35.939892   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:35.939909   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:35.939914   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:35.951332   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:35.951342   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:35.962857   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:35.962870   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:36.002795   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:36.002807   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:36.017199   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:36.017210   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:36.029143   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:36.029153   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:36.051177   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:36.051194   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:36.072160   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:36.072170   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:35:36.090189   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:36.090201   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:36.113376   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:36.113386   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:36.146532   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:36.146542   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:36.150855   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:36.150867   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:36.164391   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:36.164400   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:35:38.681160   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:43.011379   14108 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m58.4923655s)
	I1030 11:35:43.011540   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1030 11:35:43.021975   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1030 11:35:43.094762   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1030 11:35:43.168936   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:43.238342   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1030 11:35:43.245152   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1030 11:35:43.249892   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:43.326353   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1030 11:35:43.365515   14108 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1030 11:35:43.365611   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1030 11:35:43.368622   14108 start.go:563] Will wait 60s for crictl version
	I1030 11:35:43.368685   14108 ssh_runner.go:195] Run: which crictl
	I1030 11:35:43.370168   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 11:35:43.385774   14108 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1030 11:35:43.385858   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1030 11:35:43.403285   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1030 11:35:43.425582   14108 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1030 11:35:43.425672   14108 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1030 11:35:43.427142   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
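
	The /etc/hosts one-liner above is dense; unpacked, it keeps every line except a stale host.minikube.internal mapping, appends the fresh one, and copies the temp file back under sudo because the shell redirection itself runs unprivileged (same pattern, annotated):

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any old mapping
	  printf '10.0.2.2\thost.minikube.internal\n'        # append the current one
	} > /tmp/h.$$                                        # $$ = shell PID, a cheap unique temp name
	sudo cp /tmp/h.$$ /etc/hosts                         # write back as root
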
	I1030 11:35:43.431306   14108 kubeadm.go:883] updating cluster {Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1030 11:35:43.431352   14108 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1030 11:35:43.431404   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1030 11:35:43.442137   14108 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1030 11:35:43.442157   14108 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1030 11:35:43.442217   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1030 11:35:43.445454   14108 ssh_runner.go:195] Run: which lz4
	I1030 11:35:43.446668   14108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 11:35:43.447864   14108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 11:35:43.447875   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1030 11:35:44.448087   14108 docker.go:653] duration metric: took 1.001469833s to copy over tarball
	I1030 11:35:44.448175   14108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 11:35:45.636630   14108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.188454792s)
	I1030 11:35:45.636644   14108 ssh_runner.go:146] rm: /preloaded.tar.lz4
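
	The ~360 MB preload tarball restores the Docker image store wholesale instead of pulling each image over the network. The flags on the extraction above each matter (annotated restatement):

	# --xattrs --xattrs-include security.capability  preserve file capabilities on unpacked binaries
	# -I lz4                                         decompress through lz4
	# -C /var                                        image layers land under /var/lib/docker
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
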
	I1030 11:35:45.652449   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1030 11:35:45.655787   14108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1030 11:35:45.661202   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:45.738003   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1030 11:35:47.286314   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548312458s)
	I1030 11:35:47.286426   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1030 11:35:47.301434   14108 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1030 11:35:47.301443   14108 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1030 11:35:47.301447   14108 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 11:35:47.306048   14108 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:47.308009   14108 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.310036   14108 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.310438   14108 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:47.311961   14108 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.312105   14108 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.313748   14108 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.313804   14108 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.315053   14108 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1030 11:35:47.315156   14108 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.316133   14108 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.316741   14108 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:47.317284   14108 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:47.317654   14108 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1030 11:35:47.318930   14108 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:47.319485   14108 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:43.682860   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:43.682960   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:43.695384   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:43.695465   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:43.707682   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:43.707768   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:43.719269   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:43.719356   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:43.731420   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:43.731501   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:43.743974   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:43.744059   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:43.757676   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:43.757759   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:43.769040   13969 logs.go:282] 0 containers: []
	W1030 11:35:43.769052   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:43.769121   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:43.781081   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:43.781106   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:43.781111   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:43.794720   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:43.794732   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:43.807383   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:43.807394   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:35:43.823157   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:43.823170   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:43.836558   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:43.836570   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:43.874009   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:43.874025   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:43.913862   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:43.913874   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:43.930066   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:43.930080   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:43.946539   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:43.946551   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:43.971313   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:43.971329   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:43.984106   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:43.984118   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:43.989109   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:43.989119   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:35:44.009314   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:44.009326   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:46.523850   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:47.878942   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.889928   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.890306   14108 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1030 11:35:47.890333   14108 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.890364   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.907355   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1030 11:35:47.907484   14108 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1030 11:35:47.907503   14108 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.907552   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.918592   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1030 11:35:47.935700   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.947568   14108 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1030 11:35:47.947597   14108 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.947635   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.960483   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1030 11:35:47.967891   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.978857   14108 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1030 11:35:47.978882   14108 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.978963   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.993965   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1030 11:35:48.033135   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1030 11:35:48.044167   14108 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1030 11:35:48.044187   14108 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1030 11:35:48.044255   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1030 11:35:48.054386   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1030 11:35:48.054525   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1030 11:35:48.056183   14108 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1030 11:35:48.056195   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1030 11:35:48.064011   14108 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1030 11:35:48.064022   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1030 11:35:48.093327   14108 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
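
	Every cached image follows the same load pipeline as pause_3.7 above: sudo wraps only the read, because the tarballs under /var/lib/minikube/images are root-owned, and the stream goes straight into the daemon. A sketch with a confirmation step:

	sudo cat /var/lib/minikube/images/pause_3.7 | docker load
	docker images registry.k8s.io/pause:3.7   # should now list the loaded image
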
	I1030 11:35:48.138526   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:48.149260   14108 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1030 11:35:48.149287   14108 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:48.149363   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:48.160706   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1030 11:35:48.190082   14108 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1030 11:35:48.190260   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:48.200560   14108 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1030 11:35:48.200582   14108 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:48.200650   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:48.210660   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1030 11:35:48.210808   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1030 11:35:48.212213   14108 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1030 11:35:48.212223   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1030 11:35:48.253046   14108 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1030 11:35:48.253059   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1030 11:35:48.292956   14108 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
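
	The "arch mismatch: want arm64 got amd64" warnings mean the local cache held amd64 manifests for coredns (and, below, storage-provisioner), so minikube re-fetched the arm64 variants before transfer. Whether a loaded image matches the host can be checked directly (a sketch):

	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/coredns/coredns:v1.8.6
	# expected on this arm64 host: linux/arm64
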
	W1030 11:35:48.320101   14108 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1030 11:35:48.320214   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:48.331090   14108 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1030 11:35:48.331113   14108 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:48.331177   14108 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:48.351253   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 11:35:48.351400   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 11:35:48.352793   14108 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1030 11:35:48.352805   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1030 11:35:48.387602   14108 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 11:35:48.387618   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1030 11:35:48.620054   14108 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 11:35:48.620092   14108 cache_images.go:92] duration metric: took 1.318653834s to LoadCachedImages
	W1030 11:35:48.620128   14108 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1030 11:35:48.620139   14108 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1030 11:35:48.620198   14108 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-877000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
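
	In the drop-in above, the bare "ExecStart=" line is deliberate: a systemd drop-in must first clear the ExecStart inherited from the base unit before supplying its own, or systemd rejects the service for having two ExecStart values. How the merge resolved can be checked on the node (a sketch):

	sudo systemctl daemon-reload
	systemctl cat kubelet.service            # base unit plus drop-ins, in order
	systemctl show kubelet -p ExecStart      # the effective command line
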
	I1030 11:35:48.620285   14108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1030 11:35:48.633959   14108 cni.go:84] Creating CNI manager for ""
	I1030 11:35:48.633971   14108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:35:48.633977   14108 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 11:35:48.633988   14108 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-877000 NodeName:stopped-upgrade-877000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 11:35:48.634064   14108 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-877000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 11:35:48.634122   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1030 11:35:48.637091   14108 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 11:35:48.637132   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 11:35:48.639778   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1030 11:35:48.644785   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 11:35:48.649563   14108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
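
	The rendered kubeadm config is staged as kubeadm.yaml.new rather than written over the live file: later in this log (at 11:35:49) it is diffed against /var/tmp/minikube/kubeadm.yaml, and any drift triggers a reconfigure. The check is a plain diff:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
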
	I1030 11:35:48.654775   14108 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1030 11:35:48.655895   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 11:35:48.659962   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:48.736241   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 11:35:48.742861   14108 certs.go:68] Setting up /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000 for IP: 10.0.2.15
	I1030 11:35:48.742871   14108 certs.go:194] generating shared ca certs ...
	I1030 11:35:48.742879   14108 certs.go:226] acquiring lock for ca certs: {Name:mke98b939cb7b412ec11c6499518b74392aa286f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:48.743093   14108 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.key
	I1030 11:35:48.743859   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.key
	I1030 11:35:48.743870   14108 certs.go:256] generating profile certs ...
	I1030 11:35:48.744127   14108 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.key
	I1030 11:35:48.744146   14108 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3
	I1030 11:35:48.744160   14108 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1030 11:35:48.860024   14108 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3 ...
	I1030 11:35:48.860039   14108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3: {Name:mk8dc9c9d5df0b51eafee344383b82637dfd5adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:48.860450   14108 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3 ...
	I1030 11:35:48.860458   14108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3: {Name:mkceb498d88f05e1cbeff333e74974ee13f252ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:48.860643   14108 certs.go:381] copying /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt
	I1030 11:35:48.862777   14108 certs.go:385] copying /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key
	I1030 11:35:48.863134   14108 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/proxy-client.key
	I1030 11:35:48.863300   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043.pem (1338 bytes)
	W1030 11:35:48.863523   14108 certs.go:480] ignoring /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043_empty.pem, impossibly tiny 0 bytes
	I1030 11:35:48.863528   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem (1675 bytes)
	I1030 11:35:48.863561   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem (1082 bytes)
	I1030 11:35:48.863597   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem (1123 bytes)
	I1030 11:35:48.863628   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem (1675 bytes)
	I1030 11:35:48.863694   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem (1708 bytes)
	I1030 11:35:48.864051   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 11:35:48.871487   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 11:35:48.878847   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 11:35:48.885796   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 11:35:48.892579   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 11:35:48.899575   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 11:35:48.907211   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 11:35:48.914929   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 11:35:48.922299   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 11:35:48.929110   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043.pem --> /usr/share/ca-certificates/12043.pem (1338 bytes)
	I1030 11:35:48.935768   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem --> /usr/share/ca-certificates/120432.pem (1708 bytes)
	I1030 11:35:48.943263   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 11:35:48.948828   14108 ssh_runner.go:195] Run: openssl version
	I1030 11:35:48.950823   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 11:35:48.953947   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:35:48.955424   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:35:48.955449   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:35:48.957300   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 11:35:48.960147   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12043.pem && ln -fs /usr/share/ca-certificates/12043.pem /etc/ssl/certs/12043.pem"
	I1030 11:35:48.963562   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12043.pem
	I1030 11:35:48.965317   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:17 /usr/share/ca-certificates/12043.pem
	I1030 11:35:48.965346   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12043.pem
	I1030 11:35:48.967083   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12043.pem /etc/ssl/certs/51391683.0"
	I1030 11:35:48.970181   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120432.pem && ln -fs /usr/share/ca-certificates/120432.pem /etc/ssl/certs/120432.pem"
	I1030 11:35:48.973157   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120432.pem
	I1030 11:35:48.974601   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:17 /usr/share/ca-certificates/120432.pem
	I1030 11:35:48.974624   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120432.pem
	I1030 11:35:48.976499   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/120432.pem /etc/ssl/certs/3ec20f2e.0"
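
	The three test/ln blocks above reimplement OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, and the hash is exactly what openssl x509 -hash prints, which is why minikubeCA.pem ends up as b5213941.0. The equivalence as a sketch:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
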
	I1030 11:35:48.979837   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 11:35:48.981222   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 11:35:48.983329   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 11:35:48.985274   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 11:35:48.987099   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 11:35:48.988854   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 11:35:48.990580   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
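
	The run of -checkend 86400 calls above is the cert-expiry gate: openssl x509 -checkend N exits 0 when the certificate is still valid N seconds from now and non-zero otherwise, so each probe answers "good for another 24h?" without parsing dates (a sketch):

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "apiserver.crt valid for at least 24h"
	else
	  echo "apiserver.crt expires within 24h"
	fi
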
	I1030 11:35:48.992465   14108 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:35:48.992545   14108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1030 11:35:49.006363   14108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 11:35:49.009747   14108 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 11:35:49.009759   14108 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 11:35:49.009794   14108 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 11:35:49.012652   14108 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 11:35:49.012963   14108 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-877000" does not appear in /Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:35:49.013086   14108 kubeconfig.go:62] /Users/jenkins/minikube-integration/19883-11536/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-877000" cluster setting kubeconfig missing "stopped-upgrade-877000" context setting]
	I1030 11:35:49.013286   14108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/kubeconfig: {Name:mkea525c0c25887bd8d562c8182eb3da015af133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:49.013722   14108 kapi.go:59] client config for stopped-upgrade-877000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.key", CAFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10245e7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
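The repair step above adds the profile's missing cluster and context entries to the kubeconfig before a client config is built from it. A hedged sketch using client-go's clientcmd package (function and argument names are ours, not minikube's):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig inserts cluster and context entries for name if absent,
// then writes the file back to the same path.
func repairKubeconfig(path, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig("/Users/jenkins/minikube-integration/19883-11536/kubeconfig",
		"stopped-upgrade-877000", "https://10.0.2.15:8443",
		"/Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt")
}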
	I1030 11:35:49.014213   14108 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 11:35:49.016989   14108 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-877000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
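The drift check above hinges on `diff -u`'s exit code: 0 means the deployed kubeadm.yaml matches the freshly rendered one, 1 means they differ (here: the criSocket URI scheme and the cgroup driver), and anything else means diff itself failed. A sketch of that convention:

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted maps `diff -u` exit codes onto a drifted/identical/error result.
func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ, reconfigure
	}
	return false, "", err // exit >1: diff itself failed
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}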
	I1030 11:35:49.016994   14108 kubeadm.go:1160] stopping kube-system containers ...
	I1030 11:35:49.017046   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1030 11:35:49.027974   14108 docker.go:483] Stopping containers: [7b1ffc1f1881 d6a9e90789a1 74c76d98b1d5 9e4f9a6580ee ea0de2881762 4e35759a58bf 647d7c652201 f0309de3b673]
	I1030 11:35:49.028051   14108 ssh_runner.go:195] Run: docker stop 7b1ffc1f1881 d6a9e90789a1 74c76d98b1d5 9e4f9a6580ee ea0de2881762 4e35759a58bf 647d7c652201 f0309de3b673
	I1030 11:35:49.038979   14108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
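Stopping the kube-system containers relies on kubelet's k8s_<container>_<pod>_<namespace>_... Docker naming convention: list the IDs matching the namespace filter, then stop them all in a single call, as the two log lines above show. A sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all containers (running or exited) whose names mark them as kube-system pods.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	fmt.Println("Stopping containers:", ids)
	// One `docker stop` invocation for all IDs.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
}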
	I1030 11:35:49.044998   14108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 11:35:49.047892   14108 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 11:35:49.047902   14108 kubeadm.go:157] found existing configuration files:
	
	I1030 11:35:49.047932   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/admin.conf
	I1030 11:35:49.050823   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 11:35:49.050855   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 11:35:49.053578   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/kubelet.conf
	I1030 11:35:49.056138   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 11:35:49.056166   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 11:35:49.059259   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/controller-manager.conf
	I1030 11:35:49.061983   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 11:35:49.062012   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 11:35:49.064601   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/scheduler.conf
	I1030 11:35:49.067548   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 11:35:49.067575   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
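The grep/rm loop above implements a simple rule: keep an /etc/kubernetes/*.conf file only if it already points at the expected control-plane endpoint; otherwise delete it so the kubeconfig phase below regenerates it. (Here every grep exits 2 because the files are absent, so the rm calls are no-ops.) A sketch:

package main

import (
	"os"
	"strings"
)

// pruneStaleConfig removes path unless it mentions endpoint; a missing file
// counts as already pruned.
func pruneStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // endpoint present: keep the file
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		_ = pruneStaleConfig("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:57416")
	}
}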
	I1030 11:35:49.070760   14108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 11:35:49.073605   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.098100   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.620863   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.745773   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.780996   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
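Rather than a full `kubeadm init`, the restart path replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed kubeadm.yaml, with PATH pointed at the cached v1.24.1 binaries so the version-matched kubeadm is used. A sketch of that sequence:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prefer the version-matched binaries, as the `sudo env PATH=...` wrapper does.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			os.Exit(1)
		}
	}
}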
	I1030 11:35:49.803810   14108 api_server.go:52] waiting for apiserver process to appear ...
	I1030 11:35:49.803911   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:35:50.305059   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:35:50.805945   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:35:50.810448   14108 api_server.go:72] duration metric: took 1.006649417s to wait for apiserver process to appear ...
	I1030 11:35:50.810458   14108 api_server.go:88] waiting for apiserver healthz status ...
	I1030 11:35:50.810474   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
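What follows is a wait loop: poll https://10.0.2.15:8443/healthz until the apiserver answers or the timeout expires, with every failed probe triggering the diagnostic log sweep seen below. A stripped-down sketch of the polling half (TLS verification is skipped here only for brevity; minikube itself authenticates with the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   4 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}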
	I1030 11:35:51.526128   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:51.526232   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:51.538574   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:51.538664   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:51.554239   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:51.554318   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:51.564935   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:51.565014   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:51.575451   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:51.575531   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:51.586198   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:51.586284   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:51.597076   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:51.597150   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:51.607892   13969 logs.go:282] 0 containers: []
	W1030 11:35:51.607906   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:51.607975   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:51.618712   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:51.618727   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:51.618733   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:51.630998   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:51.631014   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:51.635526   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:51.635535   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:51.687061   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:51.687072   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:51.710574   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:51.710584   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:51.724827   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:51.724840   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:51.737283   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:51.737295   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:35:51.754562   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:51.754575   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:51.779086   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:51.779094   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:51.790517   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:51.790532   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:51.823774   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:51.823788   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:51.836961   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:51.836971   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:51.852286   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:51.852297   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
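From here the log interleaves two concurrent test processes (pids 14108 and 13969), each repeating the same diagnostic sweep after every failed healthz probe: resolve each component's container ID by name filter, then tail its last 400 log lines. Condensed into a sketch so the repetition below is easier to read past:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// kubelet-assigned name starts with k8s_<component>.
func containerIDs(component string) []string {
	out, _ := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		for _, id := range containerIDs(c) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s", c, id, logs)
		}
	}
}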
	I1030 11:35:55.812502   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:55.812559   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:54.368718   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:00.812761   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:00.812791   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:59.371293   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:59.371474   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:35:59.390034   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:35:59.390123   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:35:59.400645   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:35:59.400714   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:35:59.411235   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:35:59.411305   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:35:59.421383   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:35:59.421449   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:35:59.431609   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:35:59.431693   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:35:59.446846   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:35:59.446919   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:35:59.456956   13969 logs.go:282] 0 containers: []
	W1030 11:35:59.456974   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:35:59.457034   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:35:59.467578   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:35:59.467591   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:35:59.467596   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:35:59.481392   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:35:59.481404   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:35:59.494783   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:35:59.494794   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:35:59.509324   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:35:59.509333   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:35:59.520857   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:35:59.520869   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:35:59.532799   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:35:59.532811   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:35:59.558500   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:35:59.558511   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:35:59.595632   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:35:59.595642   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:35:59.599920   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:35:59.599929   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:35:59.634723   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:35:59.634735   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:35:59.649326   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:35:59.649336   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:35:59.661540   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:35:59.661552   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:35:59.675665   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:35:59.675678   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:02.195751   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:05.813066   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:05.813088   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:07.197127   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:07.197296   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:07.208526   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:07.208612   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:07.221400   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:07.221485   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:07.234101   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:07.234183   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:07.246697   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:07.246786   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:07.258199   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:07.258281   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:07.268532   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:07.268610   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:07.278247   13969 logs.go:282] 0 containers: []
	W1030 11:36:07.278268   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:07.278337   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:07.289074   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:07.289090   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:07.289096   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:07.306880   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:07.306892   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:07.318638   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:07.318648   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:07.337713   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:07.337723   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:07.363708   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:07.363719   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:07.375822   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:07.375832   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:07.411076   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:07.411109   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:07.454343   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:07.454357   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:07.468754   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:07.468767   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:07.481851   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:07.481865   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:07.501065   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:07.501078   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:07.517679   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:07.517691   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:07.522601   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:07.522609   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:10.813509   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:10.813574   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:10.039977   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:15.814208   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:15.814270   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:15.042132   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:15.042345   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:15.063248   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:15.063369   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:15.077493   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:15.077574   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:15.089827   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:15.089900   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:15.101177   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:15.101255   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:15.112899   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:15.112981   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:15.124659   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:15.124731   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:15.135635   13969 logs.go:282] 0 containers: []
	W1030 11:36:15.135648   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:15.135706   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:15.148758   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:15.148780   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:15.148785   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:15.153484   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:15.153491   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:15.171860   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:15.171873   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:15.194034   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:15.194046   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:15.217272   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:15.217283   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:15.250516   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:15.250526   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:15.285574   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:15.285585   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:15.300605   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:15.300616   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:15.311928   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:15.311939   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:15.323552   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:15.323562   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:15.340497   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:15.340507   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:15.352018   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:15.352028   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:15.369583   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:15.369593   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:17.884427   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:20.815064   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:20.815154   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:22.886832   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:22.887101   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:22.911419   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:22.911527   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:22.928220   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:22.928311   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:22.941030   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:22.941117   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:22.952296   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:22.952371   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:22.962486   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:22.962570   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:22.973320   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:22.973404   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:22.983616   13969 logs.go:282] 0 containers: []
	W1030 11:36:22.983631   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:22.983700   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:22.994426   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:22.994442   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:22.994448   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:23.006412   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:23.006424   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:23.023511   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:23.023522   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:23.048781   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:23.048793   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:23.084099   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:23.084112   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:23.089027   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:23.089036   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:23.104119   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:23.104130   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:23.118876   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:23.118887   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:23.130889   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:23.130900   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:23.143388   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:23.143402   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:23.167579   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:23.167595   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:25.816711   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:25.816758   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:23.179064   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:23.179077   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:23.212866   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:23.212877   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:25.726355   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:30.818218   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:30.818236   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:30.728200   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:30.728633   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:30.772474   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:30.772572   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:30.786287   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:30.786358   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:30.798181   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:30.798263   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:30.812396   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:30.812476   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:30.822707   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:30.822781   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:30.833126   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:30.833199   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:30.844189   13969 logs.go:282] 0 containers: []
	W1030 11:36:30.844201   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:30.844257   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:30.854551   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:30.854566   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:30.854571   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:30.878158   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:30.878166   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:30.889660   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:30.889671   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:30.905413   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:30.905423   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:30.919480   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:30.919494   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:30.931231   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:30.931244   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:30.943463   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:30.943476   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:30.963104   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:30.963114   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:30.974946   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:30.974961   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:31.009575   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:31.009583   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:31.014460   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:31.014467   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:31.050682   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:31.050696   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:31.063263   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:31.063278   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:35.820247   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:35.820286   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:33.579879   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:40.822547   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:40.822585   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:38.582039   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:38.582225   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:38.593350   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:38.593431   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:38.603957   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:38.604031   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:38.614857   13969 logs.go:282] 2 containers: [161e53b8f3c5 952bbd6d435a]
	I1030 11:36:38.614939   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:38.625636   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:38.625713   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:38.636153   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:38.636228   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:38.646603   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:38.646686   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:38.656954   13969 logs.go:282] 0 containers: []
	W1030 11:36:38.656965   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:38.657031   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:38.667552   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:38.667567   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:38.667573   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:38.681517   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:38.681529   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:38.693227   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:38.693237   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:38.707758   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:38.707772   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:38.719551   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:38.719560   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:38.736758   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:38.736777   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:38.760490   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:38.760498   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:38.793724   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:38.793732   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:38.808134   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:38.808145   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:38.820426   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:38.820437   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:38.832198   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:38.832210   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:38.843378   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:38.843392   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:38.847816   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:38.847824   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:41.385627   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:45.824863   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:45.824888   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:46.386013   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:46.386181   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:46.398942   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:46.399030   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:46.409383   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:46.409462   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:46.421541   13969 logs.go:282] 3 containers: [d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:36:46.421621   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:46.432374   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:46.432456   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:46.442994   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:46.443075   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:46.454051   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:46.454131   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:46.463554   13969 logs.go:282] 0 containers: []
	W1030 11:36:46.463565   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:46.463626   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:46.474059   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:46.474075   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:46.474079   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:46.488367   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:46.488376   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:46.514294   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:46.514308   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:46.526397   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:46.526407   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:46.541864   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:46.541878   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:36:46.563456   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:46.563470   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:46.568120   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:46.568126   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:46.611592   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:46.611608   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:46.625885   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:36:46.625895   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:36:46.637650   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:46.637664   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:46.649982   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:46.649995   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:46.684291   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:46.684305   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:46.704054   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:46.704065   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:46.722739   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:46.722752   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:50.827014   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:50.827260   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:50.843414   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:36:50.843513   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:50.855807   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:36:50.855891   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:50.866615   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:36:50.866697   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:50.876914   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:36:50.876995   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:50.887399   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:36:50.887471   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:50.898200   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:36:50.898294   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:50.908440   14108 logs.go:282] 0 containers: []
	W1030 11:36:50.908461   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:50.908532   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:50.918869   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:36:50.918886   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:36:50.918891   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:36:50.932030   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:36:50.932040   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:36:50.947700   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:36:50.947710   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:50.962061   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:36:50.962074   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:36:50.977482   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:36:50.977494   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:36:50.988978   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:36:50.988989   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:36:51.004122   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:36:51.004134   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:36:51.015985   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:51.015997   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:51.043002   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:51.043011   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:51.047706   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:36:51.047715   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:36:51.061534   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:36:51.061544   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:36:51.088896   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:36:51.088915   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:36:51.103805   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:36:51.103815   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:36:51.116780   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:51.116800   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:51.156051   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:51.156061   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:51.258829   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:36:51.258841   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:36:51.276229   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:36:51.276241   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:36:49.240964   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:53.793606   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:54.243244   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:54.243416   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:54.259033   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:36:54.259130   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:54.275943   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:36:54.276030   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:54.287350   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:36:54.287430   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:54.298359   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:36:54.298431   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:54.315005   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:36:54.315083   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:54.325699   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:36:54.325773   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:54.336262   13969 logs.go:282] 0 containers: []
	W1030 11:36:54.336275   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:54.336344   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:54.347335   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:36:54.347353   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:36:54.347358   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:36:54.359371   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:54.359383   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:54.384231   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:54.384238   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:54.418535   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:54.418545   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:54.422913   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:36:54.422920   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:36:54.434308   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:36:54.434321   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:36:54.445680   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:36:54.445692   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:36:54.460435   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:36:54.460445   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:54.472584   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:54.472609   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:54.511537   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:36:54.511547   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:36:54.525937   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:36:54.525946   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:36:54.541528   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:36:54.541543   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:36:54.563429   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:36:54.563439   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:36:54.576322   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:36:54.576335   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:36:54.587991   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:36:54.588002   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
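The cycle above repeats for both test processes: each one polls the apiserver's /healthz endpoint (the api_server.go:253/269 pairs), hits the client timeout, and then falls back to gathering component logs. A minimal standalone sketch of that polling pattern follows; this is not minikube's actual implementation, and the timeout, retry interval, and overall deadline values are illustrative assumptions (the endpoint URL is taken from the log lines above).

// healthz_poll.go - sketch of an apiserver health poll with a client
// timeout, mirroring the api_server.go:253/269 lines above.
// Timeout, retry interval, and deadline are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 4 * time.Second, // per-request timeout (assumption)
		Transport: &http.Transport{
			// The apiserver inside the VM serves a self-signed cert,
			// so a raw probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute) // overall budget (assumption)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// Corresponds to the "stopped: ... Client.Timeout exceeded
			// while awaiting headers" lines in the log.
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for /healthz")
}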
	I1030 11:36:57.103596   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:58.795867   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:58.796119   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:58.820475   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:36:58.820607   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:58.837136   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:36:58.837231   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:58.851917   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:36:58.851998   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:58.862958   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:36:58.863035   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:58.873425   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:36:58.873502   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:58.884239   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:36:58.884321   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:58.894615   14108 logs.go:282] 0 containers: []
	W1030 11:36:58.894628   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:58.894712   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:58.905239   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:36:58.905258   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:36:58.905263   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:36:58.920836   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:36:58.920848   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:36:58.932715   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:58.932727   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:58.956839   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:58.956849   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:58.961105   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:36:58.961111   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:36:58.975194   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:36:58.975204   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:36:58.986781   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:36:58.986792   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:36:58.998434   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:36:58.998447   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:36:59.009465   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:59.009475   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:59.048413   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:59.048424   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:59.084281   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:36:59.084291   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:36:59.109420   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:36:59.109433   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:36:59.123831   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:36:59.123844   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:36:59.135464   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:36:59.135480   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:36:59.150560   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:36:59.150571   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:36:59.168095   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:36:59.168105   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:36:59.182151   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:36:59.182161   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:01.699716   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:02.105956   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:02.106178   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:02.126921   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:02.127037   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:02.146632   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:02.146715   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:02.158887   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:02.158976   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:02.169822   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:02.169900   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:02.180698   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:02.180769   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:02.191012   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:02.191080   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:02.208307   13969 logs.go:282] 0 containers: []
	W1030 11:37:02.208320   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:02.208390   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:02.218791   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:02.218811   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:02.218816   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:02.232844   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:02.232856   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:02.250991   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:02.251001   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:02.275735   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:02.275749   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:02.280237   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:02.280246   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:02.319515   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:02.319528   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:02.333694   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:02.333704   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:02.345491   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:02.345502   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:02.379074   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:02.379085   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:02.391664   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:02.391679   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:02.404412   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:02.404423   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:02.415982   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:02.415993   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:02.427611   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:02.427622   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:02.439637   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:02.439649   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:02.451079   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:02.451091   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
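Before each gathering pass, the log shows one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, producing the "N containers: [...]" lines (and the "No container was found matching kindnet" warning, since this cluster runs no kindnet). A sketch of that discovery step, under the assumption that the same commands are run locally rather than over SSH; the component list is copied from the filters above.

// discover.go - sketch of the per-component container discovery seen in
// the logs: one `docker ps -a --filter=name=k8s_<component>` per component.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name carries the k8s_<component> prefix used by kubelet/cri-docker.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			// Matches the W... "No container was found matching" lines.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}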
	I1030 11:37:06.701978   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:06.702260   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:06.724670   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:06.724806   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:06.740015   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:06.740099   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:06.752230   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:06.752307   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:06.763080   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:06.763167   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:06.773808   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:06.773893   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:06.786369   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:06.786450   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:06.796948   14108 logs.go:282] 0 containers: []
	W1030 11:37:06.796960   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:06.797057   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:06.807590   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:06.807609   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:06.807616   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:06.822113   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:06.822123   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:06.836504   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:06.836515   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:06.851634   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:06.851646   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:06.892725   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:06.892737   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:06.904591   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:06.904605   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:06.919646   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:06.919657   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:06.945335   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:06.945346   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:06.959095   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:06.959105   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:06.970892   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:06.970902   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:06.986298   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:06.986312   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:07.011264   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:07.011272   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:07.015489   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:07.015498   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:07.052041   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:07.052055   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:07.063299   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:07.063311   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:07.074837   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:07.074846   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:07.096317   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:07.096327   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:04.970299   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:09.612441   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:09.972618   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:09.972788   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:09.988134   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:09.988235   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:10.001318   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:10.001400   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:10.017477   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:10.017563   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:10.028130   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:10.028208   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:10.038732   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:10.038805   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:10.049041   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:10.049123   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:10.059983   13969 logs.go:282] 0 containers: []
	W1030 11:37:10.059997   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:10.060060   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:10.070387   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:10.070404   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:10.070409   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:10.104597   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:10.104606   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:10.119572   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:10.119587   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:10.144637   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:10.144645   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:10.157017   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:10.157027   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:10.190637   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:10.190645   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:10.201856   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:10.201867   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:10.220101   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:10.220111   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:10.231798   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:10.231808   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:10.236160   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:10.236168   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:10.253323   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:10.253335   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:10.268687   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:10.268698   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:10.280231   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:10.280240   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:10.295599   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:10.295609   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:10.307168   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:10.307178   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:12.819176   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:14.614228   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:14.614365   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:14.629952   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:14.630038   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:14.640542   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:14.640610   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:14.651217   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:14.651298   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:14.661282   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:14.661367   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:14.671659   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:14.671726   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:14.682157   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:14.682228   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:14.692135   14108 logs.go:282] 0 containers: []
	W1030 11:37:14.692148   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:14.692207   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:14.703007   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:14.703025   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:14.703031   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:14.717078   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:14.717088   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:14.730850   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:14.730862   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:14.742773   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:14.742784   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:14.760583   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:14.760597   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:14.786528   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:14.786540   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:14.823879   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:14.823889   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:14.844560   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:14.844571   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:14.855893   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:14.855904   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:14.867457   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:14.867467   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:14.881901   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:14.881912   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:14.894052   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:14.894062   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:14.898167   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:14.898176   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:14.934237   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:14.934249   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:14.948618   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:14.948629   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:14.974590   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:14.974604   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:14.986551   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:14.986562   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:17.500619   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:17.820107   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:17.820252   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:17.834817   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:17.834912   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:17.847194   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:17.847273   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:17.858159   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:17.858236   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:17.869078   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:17.869159   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:17.883301   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:17.883383   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:17.894197   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:17.894281   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:17.904427   13969 logs.go:282] 0 containers: []
	W1030 11:37:17.904441   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:17.904512   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:17.920296   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:17.920313   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:17.920319   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:17.938108   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:17.938119   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:17.949863   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:17.949876   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:17.961433   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:17.961445   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:17.975690   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:17.975700   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:18.011589   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:18.011603   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:18.025504   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:18.025516   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:18.037682   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:18.037693   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:18.049415   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:18.049428   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:18.061132   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:18.061143   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:18.097039   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:18.097048   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:18.101542   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:18.101548   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:18.119458   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:18.119472   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:18.131992   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:18.132005   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:18.143487   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:18.143496   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:22.502978   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:22.503246   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:22.536575   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:22.536681   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:22.551134   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:22.551212   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:22.563680   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:22.563770   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:22.574626   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:22.574708   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:22.585537   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:22.585634   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:22.597154   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:22.597235   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:22.608215   14108 logs.go:282] 0 containers: []
	W1030 11:37:22.608227   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:22.608297   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:22.618539   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:22.618556   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:22.618562   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:22.631395   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:22.631408   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:22.668539   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:22.668548   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:20.670392   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:22.710763   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:22.710774   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:22.735692   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:22.735710   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:22.749920   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:22.749930   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:22.762797   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:22.762809   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:22.780600   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:22.780616   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:22.785042   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:22.785054   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:22.804775   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:22.804786   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:22.819057   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:22.819067   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:22.830042   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:22.830053   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:22.841398   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:22.841413   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:22.860182   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:22.860193   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:22.874326   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:22.874340   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:22.885706   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:22.885718   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:22.906168   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:22.906182   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:25.432711   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:25.672697   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:25.672896   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:25.696038   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:25.696139   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:25.709215   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:25.709299   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:25.720060   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:25.720139   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:25.730684   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:25.730773   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:25.741372   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:25.741449   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:25.752475   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:25.752547   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:25.765411   13969 logs.go:282] 0 containers: []
	W1030 11:37:25.765427   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:25.765493   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:25.775924   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:25.775940   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:25.775946   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:25.788414   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:25.788427   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:25.802991   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:25.803002   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:25.822214   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:25.822229   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:25.835382   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:25.835392   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:25.860721   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:25.860735   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:25.896171   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:25.896182   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:25.909943   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:25.909956   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:25.930330   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:25.930343   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:25.941926   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:25.941935   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:25.962677   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:25.962689   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:25.974185   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:25.974194   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:25.979193   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:25.979203   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:26.018539   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:26.018552   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:26.030462   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:26.030472   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:30.435091   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:30.435538   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:30.467569   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:30.467716   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:30.486548   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:30.486663   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:30.501385   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:30.501478   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:30.513330   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:30.513413   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:30.524663   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:30.524737   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:30.539458   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:30.539537   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:30.553881   14108 logs.go:282] 0 containers: []
	W1030 11:37:30.553892   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:30.553959   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:30.564320   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:30.564337   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:30.564342   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:30.607491   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:30.607506   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:30.622217   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:30.622228   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:30.640450   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:30.640461   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:30.652641   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:30.652654   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:30.667524   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:30.667536   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:30.679118   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:30.679130   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:30.683797   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:30.683805   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:30.710043   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:30.710058   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:30.723662   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:30.723672   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:30.735043   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:30.735054   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:30.751646   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:30.751658   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:30.787067   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:30.787077   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:30.799140   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:30.799151   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:30.813862   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:30.813874   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:30.832492   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:30.832505   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:30.845174   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:30.845185   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:28.545526   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:33.373748   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:33.547926   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:33.548131   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:33.572135   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:33.572239   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:33.586942   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:33.587030   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:33.599027   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:33.599108   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:33.610367   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:33.610440   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:33.621128   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:33.621210   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:33.631703   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:33.631783   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:33.642169   13969 logs.go:282] 0 containers: []
	W1030 11:37:33.642182   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:33.642245   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:33.652385   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:33.652401   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:33.652406   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:33.670029   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:33.670039   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:33.685427   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:33.685439   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:33.690287   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:33.690294   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:33.704317   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:33.704327   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:33.724210   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:33.724221   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:33.735978   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:33.735988   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:33.747900   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:33.747909   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:33.759012   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:33.759024   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:33.770715   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:33.770726   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:33.784889   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:33.784900   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:33.796556   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:33.796566   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:33.829775   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:33.829785   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:33.880063   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:33.880076   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:33.905018   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:33.905027   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:36.419433   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:38.376449   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:38.376990   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:38.417757   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:38.417923   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:38.439859   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:38.439989   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:38.456445   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:38.456542   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:38.468742   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:38.468827   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:38.479821   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:38.479906   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:38.491270   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:38.491348   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:38.503061   14108 logs.go:282] 0 containers: []
	W1030 11:37:38.503074   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:38.503146   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:38.514093   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:38.514112   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:38.514118   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:38.553359   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:38.553373   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:38.568847   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:38.568857   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:38.581160   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:38.581171   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:38.598089   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:38.598101   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:38.613085   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:38.613098   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:38.637064   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:38.637075   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:38.651602   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:38.651612   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:38.663281   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:38.663292   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:38.681285   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:38.681299   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:38.719488   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:38.719499   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:38.733574   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:38.733586   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:38.748143   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:38.748153   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:38.759317   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:38.759330   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:38.771208   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:38.771219   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:38.776092   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:38.776099   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:38.801670   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:38.801682   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
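The "Checking apiserver healthz" / "stopped:" pairs throughout this section come from a client-side probe of https://10.0.2.15:8443/healthz whose HTTP client gives up before the apiserver responds; the quoted error text, "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", is Go's net/http wording for a Client.Timeout expiring before response headers arrive. A minimal sketch of such a probe follows; the 2-second timeout and the skipped TLS verification are assumptions for illustration, not values taken from minikube's api_server.go.

// Sketch of an apiserver healthz probe that can fail exactly as in this log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// Hypothetical timeout; when it fires before headers arrive, the error
		// reads "context deadline exceeded (Client.Timeout exceeded while
		// awaiting headers)", as in the "stopped:" lines above.
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The test apiserver's cert is not in the local trust store, so a
			// throwaway probe skips verification (assumption for this sketch).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}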
	I1030 11:37:41.327522   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:41.421897   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:41.422284   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:41.453529   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:41.453664   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:41.472512   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:41.472622   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:41.487705   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:41.487798   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:41.501010   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:41.501081   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:41.511967   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:41.512050   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:41.522663   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:41.522744   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:41.547490   13969 logs.go:282] 0 containers: []
	W1030 11:37:41.547502   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:41.547567   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:41.557849   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:41.557866   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:41.557872   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:41.571944   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:41.571955   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:41.605399   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:41.605409   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:41.610048   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:41.610055   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:41.621819   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:41.621831   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:41.633810   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:41.633821   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:41.648804   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:41.648814   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:41.683521   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:41.683532   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:41.696231   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:41.696241   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:41.714031   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:41.714041   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:41.725632   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:41.725643   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:41.737253   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:41.737264   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:41.752149   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:41.752158   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:41.776387   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:41.776397   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:41.790216   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:41.790226   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
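Besides the per-container logs, each cycle also pulls three host-level sources: the kubelet unit and the docker/cri-docker units from journald, plus recent kernel warnings from dmesg. The sketch below runs those three command lines, copied verbatim from the entries above, through /bin/bash -c; executing them locally is a stand-in for minikube's ssh_runner, which runs them inside the guest VM, and they need a systemd host with sudo rights to produce output.

// Sketch of the host-level log sources gathered in each cycle. The command
// strings are taken from the log; the local exec wrapper is an assumption.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmdline := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", name, out)
	}
}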
	I1030 11:37:46.330213   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:46.330486   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:46.353715   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:46.353829   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:46.369124   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:46.369218   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:46.381897   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:46.381990   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:46.392944   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:46.393021   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:46.403520   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:46.403598   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:46.425522   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:46.425603   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:46.436472   14108 logs.go:282] 0 containers: []
	W1030 11:37:46.436484   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:46.436549   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:46.447300   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:46.447317   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:46.447323   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:46.481538   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:46.481552   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:46.495645   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:46.495658   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:46.508085   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:46.508099   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:46.520850   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:46.520865   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:46.538087   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:46.538097   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:46.552545   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:46.552557   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:46.564498   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:46.564509   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:46.579262   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:46.579273   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:46.603158   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:46.603165   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:46.640297   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:46.640306   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:46.654738   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:46.654749   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:46.679744   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:46.679756   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:46.694840   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:46.694852   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:46.706152   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:46.706164   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:46.710332   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:46.710341   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:46.722419   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:46.722429   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:44.303219   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:49.238512   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:49.305510   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:49.305686   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:49.330872   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:49.331006   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:49.348354   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:49.348454   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:49.361204   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:49.361291   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:49.372342   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:49.372421   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:49.382919   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:49.382989   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:49.393145   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:49.393214   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:49.402819   13969 logs.go:282] 0 containers: []
	W1030 11:37:49.402829   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:49.402887   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:49.418921   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:49.418937   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:49.418943   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:49.452249   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:49.452258   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:49.464102   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:49.464113   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:49.475695   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:49.475707   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:49.487928   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:49.487939   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:49.494244   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:49.494252   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:49.505693   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:49.505704   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:49.517720   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:49.517731   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:49.553742   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:49.553753   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:49.565426   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:49.565435   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:49.580220   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:49.580231   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:49.597919   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:49.597928   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:49.622993   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:49.623003   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:49.637430   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:49.637443   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:49.654582   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:49.654592   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:52.168179   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:54.240827   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:54.240954   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:54.254569   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:54.254661   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:54.266479   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:54.266564   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:54.276865   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:54.276944   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:54.287938   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:54.288019   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:54.299212   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:54.299289   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:54.309992   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:54.310065   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:54.326312   14108 logs.go:282] 0 containers: []
	W1030 11:37:54.326325   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:54.326401   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:54.341284   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:54.341302   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:54.341308   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:54.345522   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:54.345531   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:54.359611   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:54.359621   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:54.371545   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:54.371557   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:54.396086   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:54.396095   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:54.409900   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:54.409910   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:54.424705   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:54.424714   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:54.435873   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:54.435882   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:54.450815   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:54.450826   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:54.464309   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:54.464321   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:54.481377   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:54.481387   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:54.495164   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:54.495175   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:54.533407   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:54.533418   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:54.567957   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:54.567968   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:54.593625   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:54.593642   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:54.605308   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:54.605322   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:54.618154   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:54.618168   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:57.132426   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:57.170547   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:57.170952   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:57.207156   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:37:57.207313   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:57.228836   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:37:57.228947   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:57.244289   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:37:57.244392   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:57.257055   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:37:57.257132   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:57.267723   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:37:57.267803   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:57.278528   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:37:57.278608   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:57.288995   13969 logs.go:282] 0 containers: []
	W1030 11:37:57.289007   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:57.289069   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:57.299612   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:37:57.299631   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:57.299637   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:57.334158   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:37:57.334166   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:37:57.346131   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:37:57.346144   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:37:57.358566   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:57.358579   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:57.396383   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:37:57.396398   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:37:57.408226   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:37:57.408240   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:37:57.420032   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:37:57.420044   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:37:57.438259   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:37:57.438270   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:37:57.449725   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:57.449737   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:57.454233   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:37:57.454242   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:37:57.468813   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:57.468826   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:57.493934   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:37:57.493943   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:37:57.508619   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:37:57.508632   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:37:57.523317   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:37:57.523327   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:37:57.535263   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:37:57.535274   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
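The "container status" step is the one command here with a built-in fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is on the PATH and otherwise falls back to plain docker ps -a (the "which crictl || echo crictl" substitution keeps the first branch a valid command name even when crictl is absent, so it fails cleanly and the "||" fallback fires). A Go rendering of the same preference order might look like the sketch below, with exec.LookPath playing the role of which; this is an illustrative equivalent, not minikube's code.

// Sketch of the crictl-or-docker fallback used for "container status".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Fall back to docker unless crictl is installed, mirroring the
	// `which crictl || echo crictl` ... `|| sudo docker ps -a` command above.
	cmd := exec.Command("docker", "ps", "-a")
	if _, err := exec.LookPath("crictl"); err == nil {
		cmd = exec.Command("crictl", "ps", "-a")
	}
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}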
	I1030 11:38:02.135081   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:02.135210   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:02.154060   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:02.154160   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:02.165036   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:02.165119   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:02.175404   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:02.175493   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:02.186807   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:02.186887   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:02.197569   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:02.197643   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:02.208146   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:02.208209   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:02.219076   14108 logs.go:282] 0 containers: []
	W1030 11:38:02.219088   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:02.219151   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:02.229437   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:02.229455   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:02.229461   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:02.249719   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:02.249732   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:02.286102   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:02.286110   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:02.297310   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:02.297322   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:02.308913   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:02.308923   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:02.326157   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:02.326170   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:02.337998   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:02.338011   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:02.372424   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:02.372435   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:02.386789   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:02.386804   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:02.398503   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:02.398517   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:02.410391   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:02.410400   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:02.414488   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:02.414495   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:02.429109   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:02.429119   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:02.454428   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:02.454439   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:02.468162   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:02.468172   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:02.482640   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:02.482650   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:02.494615   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:02.494625   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:00.051117   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:05.019103   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:05.053303   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:05.053509   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:05.070046   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:05.070153   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:05.082721   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:05.082805   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:05.094045   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:05.094138   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:05.105086   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:05.105167   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:05.117840   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:05.117926   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:05.128616   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:05.128697   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:05.138594   13969 logs.go:282] 0 containers: []
	W1030 11:38:05.138608   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:05.138672   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:05.149380   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:05.149396   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:05.149401   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:05.163313   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:05.163324   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:05.176609   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:05.176622   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:05.188054   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:05.188066   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:05.211677   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:05.211687   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:05.223584   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:05.223598   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:05.236397   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:05.236409   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:05.252574   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:05.252590   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:05.265848   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:05.265859   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:05.302218   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:05.302230   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:05.315003   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:05.315014   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:05.331856   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:05.331871   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:05.366554   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:05.366574   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:05.371686   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:05.371695   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:05.386126   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:05.386139   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:07.905594   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:10.021505   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:10.021600   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:10.033913   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:10.033996   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:10.044671   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:10.044751   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:10.055684   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:10.055763   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:10.066256   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:10.066334   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:10.076705   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:10.076780   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:10.087044   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:10.087125   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:10.097260   14108 logs.go:282] 0 containers: []
	W1030 11:38:10.097273   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:10.097338   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:10.108884   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:10.108902   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:10.108908   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:10.113273   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:10.113282   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:10.148132   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:10.148147   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:10.162776   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:10.162788   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:10.202009   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:10.202020   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:10.230914   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:10.230925   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:10.249692   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:10.249703   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:10.261535   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:10.261547   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:10.273238   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:10.273251   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:10.298313   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:10.298321   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:10.313844   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:10.313854   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:10.327804   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:10.327815   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:10.339703   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:10.339714   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:10.357113   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:10.357123   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:10.371153   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:10.371165   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:10.383775   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:10.383785   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:10.395534   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:10.395545   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:12.907938   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:12.908208   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:12.933907   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:12.934021   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:12.951525   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:12.951626   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:12.965739   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:12.965919   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:12.978352   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:12.978432   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:12.989160   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:12.989228   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:13.000467   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:13.000543   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:13.011110   13969 logs.go:282] 0 containers: []
	W1030 11:38:13.011119   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:13.011184   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:13.021674   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:13.021689   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:13.021695   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:13.045666   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:13.045688   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:13.065847   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:13.065858   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:13.089392   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:13.089406   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:13.125180   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:13.125190   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:13.137567   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:13.137576   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:13.151501   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:13.151515   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:13.163471   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:13.163481   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:12.908110   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:13.180955   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:13.180964   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:13.185840   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:13.185845   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:13.197487   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:13.197496   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:13.234160   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:13.234175   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:13.246592   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:13.246601   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:13.257801   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:13.257811   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:13.269841   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:13.269855   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:15.787676   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:17.910501   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:17.910753   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:17.938216   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:17.938335   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:17.955821   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:17.955913   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:17.968130   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:17.968217   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:17.979353   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:17.979430   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:17.989797   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:17.989871   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:18.000242   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:18.000321   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:18.018071   14108 logs.go:282] 0 containers: []
	W1030 11:38:18.018082   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:18.018148   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:18.032821   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:18.032839   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:18.032845   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:18.072099   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:18.072109   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:18.076525   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:18.076533   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:18.112520   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:18.112532   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:18.137537   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:18.137548   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:18.149397   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:18.149409   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:18.163562   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:18.163574   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:18.188478   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:18.188489   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:18.202787   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:18.202799   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:18.214278   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:18.214292   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:18.228515   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:18.228529   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:18.239490   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:18.239500   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:18.253336   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:18.253348   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:18.265174   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:18.265186   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:18.276904   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:18.276915   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:18.299119   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:18.299133   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:18.313381   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:18.313392   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:20.827313   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:20.790007   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:20.790315   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:20.816148   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:20.816280   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:20.833408   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:20.833485   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:20.846432   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:20.846516   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:20.857100   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:20.857175   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:20.867470   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:20.867552   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:20.878781   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:20.878855   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:20.890896   13969 logs.go:282] 0 containers: []
	W1030 11:38:20.890910   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:20.890978   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:20.902484   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:20.902500   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:20.902505   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:20.937443   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:20.937453   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:20.973500   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:20.973511   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:20.985896   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:20.985908   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:20.997492   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:20.997505   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:21.012012   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:21.012023   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:21.017007   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:21.017013   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:21.034295   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:21.034306   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:21.052667   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:21.052677   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:21.067026   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:21.067036   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:21.078571   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:21.078580   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:21.095720   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:21.095731   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:21.108110   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:21.108123   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:21.120008   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:21.120020   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:21.132221   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:21.132232   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:25.829604   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:25.830202   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:25.876915   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:25.877065   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:25.898508   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:25.898606   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:25.912772   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:25.912865   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:25.924100   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:25.924181   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:25.934543   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:25.934624   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:25.945380   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:25.945450   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:25.955684   14108 logs.go:282] 0 containers: []
	W1030 11:38:25.955698   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:25.955765   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:25.965936   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:25.965956   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:25.965962   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:26.003716   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:26.003725   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:26.020748   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:26.020760   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:26.038465   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:26.038478   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:26.049911   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:26.049924   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:26.065742   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:26.065752   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:26.080556   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:26.080567   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:26.093525   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:26.093536   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:26.121341   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:26.121354   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:26.135976   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:26.135987   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:26.160215   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:26.160224   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:26.171964   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:26.171975   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:26.184169   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:26.184179   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:26.223394   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:26.223404   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:26.228297   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:26.228305   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:26.253429   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:26.253438   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:26.269209   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:26.269221   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:23.659573   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:28.790353   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:28.661732   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:28.661858   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:28.672849   13969 logs.go:282] 1 containers: [c0bf75261edd]
	I1030 11:38:28.672934   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:28.683953   13969 logs.go:282] 1 containers: [20f8cd717ba5]
	I1030 11:38:28.684036   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:28.694582   13969 logs.go:282] 4 containers: [547a9ceba079 d7158323063f 161e53b8f3c5 952bbd6d435a]
	I1030 11:38:28.694666   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:28.705202   13969 logs.go:282] 1 containers: [f47049212904]
	I1030 11:38:28.705273   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:28.715826   13969 logs.go:282] 1 containers: [64e0d55e4835]
	I1030 11:38:28.715901   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:28.726292   13969 logs.go:282] 1 containers: [c84340d817e1]
	I1030 11:38:28.726370   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:28.736469   13969 logs.go:282] 0 containers: []
	W1030 11:38:28.736481   13969 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:28.736542   13969 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:28.747385   13969 logs.go:282] 1 containers: [bd8729aef14c]
	I1030 11:38:28.747400   13969 logs.go:123] Gathering logs for kube-apiserver [c0bf75261edd] ...
	I1030 11:38:28.747405   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0bf75261edd"
	I1030 11:38:28.761781   13969 logs.go:123] Gathering logs for etcd [20f8cd717ba5] ...
	I1030 11:38:28.761794   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20f8cd717ba5"
	I1030 11:38:28.778863   13969 logs.go:123] Gathering logs for coredns [d7158323063f] ...
	I1030 11:38:28.778873   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7158323063f"
	I1030 11:38:28.791016   13969 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:28.791025   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:28.825264   13969 logs.go:123] Gathering logs for coredns [547a9ceba079] ...
	I1030 11:38:28.825273   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 547a9ceba079"
	I1030 11:38:28.836768   13969 logs.go:123] Gathering logs for container status ...
	I1030 11:38:28.836780   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:28.849650   13969 logs.go:123] Gathering logs for kube-proxy [64e0d55e4835] ...
	I1030 11:38:28.849659   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e0d55e4835"
	I1030 11:38:28.861671   13969 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:28.861681   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:28.886224   13969 logs.go:123] Gathering logs for coredns [952bbd6d435a] ...
	I1030 11:38:28.886236   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952bbd6d435a"
	I1030 11:38:28.898448   13969 logs.go:123] Gathering logs for kube-scheduler [f47049212904] ...
	I1030 11:38:28.898458   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f47049212904"
	I1030 11:38:28.917517   13969 logs.go:123] Gathering logs for kube-controller-manager [c84340d817e1] ...
	I1030 11:38:28.917526   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84340d817e1"
	I1030 11:38:28.934836   13969 logs.go:123] Gathering logs for storage-provisioner [bd8729aef14c] ...
	I1030 11:38:28.934847   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd8729aef14c"
	I1030 11:38:28.946337   13969 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:28.946347   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:28.950655   13969 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:28.950661   13969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:28.984742   13969 logs.go:123] Gathering logs for coredns [161e53b8f3c5] ...
	I1030 11:38:28.984752   13969 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 161e53b8f3c5"
	I1030 11:38:31.498508   13969 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:36.500800   13969 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:36.503776   13969 out.go:201] 
	W1030 11:38:36.508750   13969 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1030 11:38:36.508756   13969 out.go:270] * 
	W1030 11:38:36.509189   13969 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:38:36.520797   13969 out.go:201] 
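
The exit above is the GUEST_START failure: every probe of https://10.0.2.15:8443/healthz timed out. The probe can be reproduced by hand against the same endpoint; a sketch, assuming SSH access to the profile under test (running-upgrade-135000, per the journal below) and that curl is present in the guest image:

    # Hypothetical manual healthz probe from inside the guest; -k skips TLS
    # verification and --max-time mirrors the client timeout seen above.
    minikube ssh -p running-upgrade-135000 -- \
      curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # A healthy apiserver answers "ok"; a hang here reproduces the
    # "context deadline exceeded" errors logged above.
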
	I1030 11:38:33.791469   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:33.791801   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:33.819613   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:33.819765   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:33.837098   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:33.837195   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:33.851073   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:33.851159   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:33.862782   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:33.862868   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:33.876756   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:33.876836   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:33.887730   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:33.887812   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:33.897726   14108 logs.go:282] 0 containers: []
	W1030 11:38:33.897742   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:33.897809   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:33.908865   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:33.908883   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:33.908888   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:33.913221   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:33.913227   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:33.925204   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:33.925216   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:33.948614   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:33.948623   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:33.961118   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:33.961133   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:33.997862   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:33.997873   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:34.012612   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:34.012626   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:34.027432   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:34.027446   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:34.039559   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:34.039573   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:34.056439   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:34.056453   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:34.074260   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:34.074271   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:34.113524   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:34.113536   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:34.127793   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:34.127804   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:34.141909   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:34.141919   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:34.154585   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:34.154599   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:34.169689   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:34.169699   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:34.217510   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:34.217520   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:36.731846   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:41.733989   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:41.734259   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:41.760028   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:41.760140   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:41.774468   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:41.774557   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:41.791110   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:41.791188   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:41.802227   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:41.802313   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:41.813141   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:41.813217   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:41.823462   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:41.823539   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:41.834504   14108 logs.go:282] 0 containers: []
	W1030 11:38:41.834517   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:41.834582   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:41.847012   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:41.847029   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:41.847035   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:41.851840   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:41.851846   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:41.866148   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:41.866163   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:41.883499   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:41.883509   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:41.897808   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:41.897818   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:41.935651   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:41.935658   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:41.971632   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:41.971641   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:41.985733   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:41.985744   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:41.999840   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:41.999849   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:42.012302   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:42.012314   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:42.025871   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:42.025885   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:42.050726   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:42.050747   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:42.062951   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:42.062961   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:42.074563   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:42.074575   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:42.086621   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:42.086630   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:42.101164   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:42.101175   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:42.132480   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:42.132490   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:44.653711   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
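
Everything from here to the end of the report is the sectioned log bundle collected after the failure (==> Docker <==, ==> container status <==, and so on). The same bundle can be produced manually; a sketch, assuming the profile name seen in the journal below:

    # Collect the same sectioned dump for the profile (name is an assumption):
    minikube logs -p running-upgrade-135000 --file=logs.txt
    # Each "==> X <==" section corresponds to one of the gathering commands
    # logged earlier, e.g. "journalctl -u docker -u cri-docker" for Docker.
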
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-10-30 18:29:46 UTC, ends at Wed 2024-10-30 18:38:52 UTC. --
	Oct 30 18:38:36 running-upgrade-135000 dockerd[3190]: time="2024-10-30T18:38:36.636228988Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/086a3084b98e5242d6ac52ce537e1828be4191d5a82f309ee7c534dc62d0ce72 pid=19310 runtime=io.containerd.runc.v2
	Oct 30 18:38:37 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:37Z" level=error msg="ContainerStats resp: {0x400041e440 linux}"
	Oct 30 18:38:37 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:37Z" level=error msg="ContainerStats resp: {0x4000664f80 linux}"
	Oct 30 18:38:37 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:37Z" level=error msg="ContainerStats resp: {0x4000874e40 linux}"
	Oct 30 18:38:38 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:38Z" level=error msg="ContainerStats resp: {0x40009ac280 linux}"
	Oct 30 18:38:38 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 30 18:38:39 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:39Z" level=error msg="ContainerStats resp: {0x400096b180 linux}"
	Oct 30 18:38:39 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:39Z" level=error msg="ContainerStats resp: {0x40009acf40 linux}"
	Oct 30 18:38:39 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:39Z" level=error msg="ContainerStats resp: {0x400096bdc0 linux}"
	Oct 30 18:38:39 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:39Z" level=error msg="ContainerStats resp: {0x40009ad840 linux}"
	Oct 30 18:38:39 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:39Z" level=error msg="ContainerStats resp: {0x40000b9c40 linux}"
	Oct 30 18:38:39 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:39Z" level=error msg="ContainerStats resp: {0x40009adfc0 linux}"
	Oct 30 18:38:39 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:39Z" level=error msg="ContainerStats resp: {0x4000356840 linux}"
	Oct 30 18:38:43 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 30 18:38:48 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 30 18:38:49 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:49Z" level=error msg="ContainerStats resp: {0x4000777c00 linux}"
	Oct 30 18:38:49 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:49Z" level=error msg="ContainerStats resp: {0x4000665d80 linux}"
	Oct 30 18:38:50 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:50Z" level=error msg="ContainerStats resp: {0x4000875ac0 linux}"
	Oct 30 18:38:51 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:51Z" level=error msg="ContainerStats resp: {0x40009ad300 linux}"
	Oct 30 18:38:51 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:51Z" level=error msg="ContainerStats resp: {0x40009ad740 linux}"
	Oct 30 18:38:51 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:51Z" level=error msg="ContainerStats resp: {0x40003572c0 linux}"
	Oct 30 18:38:51 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:51Z" level=error msg="ContainerStats resp: {0x4000357b00 linux}"
	Oct 30 18:38:51 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:51Z" level=error msg="ContainerStats resp: {0x4000698240 linux}"
	Oct 30 18:38:51 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:51Z" level=error msg="ContainerStats resp: {0x40007e4c00 linux}"
	Oct 30 18:38:51 running-upgrade-135000 cri-dockerd[3033]: time="2024-10-30T18:38:51Z" level=error msg="ContainerStats resp: {0x4000698200 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	086a3084b98e5       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   18352b9010d63
	b8aa9b2d5af20       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   3b468a979e545
	547a9ceba0791       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   18352b9010d63
	d7158323063fd       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3b468a979e545
	64e0d55e4835f       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   e1916e6ae86ab
	bd8729aef14c6       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   750ad7cf11b18
	20f8cd717ba5d       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   0c3f487aeeb3b
	f470492129049       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   9a4510a7b05ec
	c84340d817e13       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   d817224c37843
	c0bf75261edd0       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   e3e628bde5659
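
The table above is produced by the fallback command the harness runs for the "container status" section (verbatim from the gathering lines earlier):

    # Inside the guest: prefer crictl, fall back to docker if it is missing.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Note that the apiserver and etcd containers are listed as Running with attempt 0, while the coredns containers have been restarted (attempt 2 running, attempt 1 exited).
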
	
	
	==> coredns [086a3084b98e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7331953063003127799.2966784524224513109. HINFO: read udp 10.244.0.3:40520->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7331953063003127799.2966784524224513109. HINFO: read udp 10.244.0.3:48855->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7331953063003127799.2966784524224513109. HINFO: read udp 10.244.0.3:59205->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7331953063003127799.2966784524224513109. HINFO: read udp 10.244.0.3:34433->10.0.2.3:53: i/o timeout
	
	
	==> coredns [547a9ceba079] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:35647->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:59175->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:33553->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:53933->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:52812->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:41758->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:50409->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:44037->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:54749->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5287070997711220430.8790313744947121945. HINFO: read udp 10.244.0.3:49252->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b8aa9b2d5af2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1633235936212187510.4946317190104270907. HINFO: read udp 10.244.0.2:52730->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1633235936212187510.4946317190104270907. HINFO: read udp 10.244.0.2:56692->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1633235936212187510.4946317190104270907. HINFO: read udp 10.244.0.2:42418->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1633235936212187510.4946317190104270907. HINFO: read udp 10.244.0.2:58885->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d7158323063f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:56795->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:60163->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:44491->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:33386->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:33276->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:36930->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:32951->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:44410->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 252196919990442761.2575602575129407938. HINFO: read udp 10.244.0.2:44380->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
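
All four coredns logs show the same symptom: CoreDNS itself starts and serves on :53, but upstream lookups to 10.0.2.3:53 (the DNS forwarder of QEMU's default user-mode network) time out. A quick manual check of that upstream, as a sketch (assumes a resolver client such as nslookup exists in the guest image):

    # Hypothetical check of the upstream resolver the errors point at:
    minikube ssh -p running-upgrade-135000 -- nslookup kubernetes.io 10.0.2.3
    # A timeout here reproduces the "read udp ...->10.0.2.3:53: i/o timeout"
    # lines and implicates guest-to-host DNS rather than CoreDNS.
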
	
	
	==> describe nodes <==
	Name:               running-upgrade-135000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-135000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0
	                    minikube.k8s.io/name=running-upgrade-135000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_30T11_34_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Oct 2024 18:34:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-135000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Oct 2024 18:38:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Oct 2024 18:34:35 +0000   Wed, 30 Oct 2024 18:34:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Oct 2024 18:34:35 +0000   Wed, 30 Oct 2024 18:34:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Oct 2024 18:34:35 +0000   Wed, 30 Oct 2024 18:34:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Oct 2024 18:34:35 +0000   Wed, 30 Oct 2024 18:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-135000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eea4b5ad807423eb38d825d671b34b6
	  System UUID:                9eea4b5ad807423eb38d825d671b34b6
	  Boot ID:                    1e473363-0f82-497b-b7bd-0f47a26ca167
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8lxx6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-phfzq                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-135000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-135000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-135000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-xxz2c                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-135000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-135000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-135000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-135000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-135000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-135000 event: Registered Node running-upgrade-135000 in Controller
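
Despite the failing healthz probes, the node object itself reports Ready and its lease was renewed at 18:38:50, seconds before this dump was taken. The describe output is gathered in-guest with the bundled kubectl (see the gathering command above); a host-side equivalent, assuming minikube created the usual kubeconfig context named after the profile:

    # Assumed context name matches the profile:
    kubectl --context running-upgrade-135000 describe node running-upgrade-135000
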
	
	
	==> dmesg <==
	[  +1.715506] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.082978] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.079252] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.146951] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.080106] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.071654] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[Oct30 18:30] systemd-fstab-generator[1282]: Ignoring "noauto" for root device
	[  +9.661818] systemd-fstab-generator[1917]: Ignoring "noauto" for root device
	[  +2.697836] systemd-fstab-generator[2195]: Ignoring "noauto" for root device
	[  +0.141725] systemd-fstab-generator[2228]: Ignoring "noauto" for root device
	[  +0.101003] systemd-fstab-generator[2239]: Ignoring "noauto" for root device
	[  +0.091704] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +2.504149] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.203946] systemd-fstab-generator[2990]: Ignoring "noauto" for root device
	[  +0.079836] systemd-fstab-generator[3001]: Ignoring "noauto" for root device
	[  +0.081669] systemd-fstab-generator[3012]: Ignoring "noauto" for root device
	[  +0.093440] systemd-fstab-generator[3026]: Ignoring "noauto" for root device
	[  +2.324051] systemd-fstab-generator[3177]: Ignoring "noauto" for root device
	[  +3.012978] systemd-fstab-generator[3876]: Ignoring "noauto" for root device
	[  +1.983483] systemd-fstab-generator[4407]: Ignoring "noauto" for root device
	[ +18.838721] kauditd_printk_skb: 68 callbacks suppressed
	[Oct30 18:34] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.348272] systemd-fstab-generator[12413]: Ignoring "noauto" for root device
	[  +5.656149] systemd-fstab-generator[13003]: Ignoring "noauto" for root device
	[  +0.452591] systemd-fstab-generator[13136]: Ignoring "noauto" for root device
	
	
	==> etcd [20f8cd717ba5] <==
	{"level":"info","ts":"2024-10-30T18:34:31.013Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-30T18:34:31.015Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-30T18:34:31.013Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-30T18:34:31.017Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-30T18:34:31.015Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-30T18:34:31.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-30T18:34:31.017Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-30T18:34:31.700Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T18:34:31.701Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T18:34:31.701Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T18:34:31.701Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-30T18:34:31.701Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-135000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-30T18:34:31.701Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T18:34:31.702Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-30T18:34:31.702Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-30T18:34:31.702Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-30T18:34:31.702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-30T18:34:31.702Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:38:52 up 9 min,  0 users,  load average: 0.19, 0.27, 0.15
	Linux running-upgrade-135000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c0bf75261edd] <==
	I1030 18:34:32.867894       1 controller.go:611] quota admission added evaluator for: namespaces
	I1030 18:34:32.915429       1 cache.go:39] Caches are synced for autoregister controller
	I1030 18:34:32.916776       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1030 18:34:32.916884       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1030 18:34:32.927663       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1030 18:34:32.929730       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1030 18:34:32.935630       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1030 18:34:33.670242       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1030 18:34:33.828042       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1030 18:34:33.837261       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1030 18:34:33.837303       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1030 18:34:33.985394       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1030 18:34:33.994960       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 18:34:34.074611       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1030 18:34:34.076735       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1030 18:34:34.077055       1 controller.go:611] quota admission added evaluator for: endpoints
	I1030 18:34:34.078305       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 18:34:34.962030       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1030 18:34:35.515650       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1030 18:34:35.518903       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1030 18:34:35.525046       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1030 18:34:35.574092       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1030 18:34:48.568054       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1030 18:34:48.616986       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1030 18:34:49.092813       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
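
The last entry in this apiserver log is from 18:34:49, roughly four minutes before the failing healthz probes, yet the container-status table above still lists the container as Running. Whether it is still alive can be rechecked with the harness's own filter (verbatim from the gathering commands earlier):

    # Inside the guest: same filter the harness uses for the apiserver.
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
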
	
	
	==> kube-controller-manager [c84340d817e1] <==
	I1030 18:34:47.862050       1 shared_informer.go:262] Caches are synced for PVC protection
	I1030 18:34:47.862075       1 shared_informer.go:262] Caches are synced for taint
	I1030 18:34:47.862177       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1030 18:34:47.862218       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-135000. Assuming now as a timestamp.
	I1030 18:34:47.862278       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1030 18:34:47.862298       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1030 18:34:47.862424       1 event.go:294] "Event occurred" object="running-upgrade-135000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-135000 event: Registered Node running-upgrade-135000 in Controller"
	I1030 18:34:47.863227       1 shared_informer.go:262] Caches are synced for deployment
	I1030 18:34:47.863621       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1030 18:34:47.864820       1 shared_informer.go:262] Caches are synced for GC
	I1030 18:34:47.870494       1 shared_informer.go:262] Caches are synced for endpoint
	I1030 18:34:47.875122       1 shared_informer.go:262] Caches are synced for resource quota
	I1030 18:34:47.884294       1 shared_informer.go:262] Caches are synced for disruption
	I1030 18:34:47.884303       1 disruption.go:371] Sending events to api server.
	I1030 18:34:47.894472       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1030 18:34:47.912968       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1030 18:34:47.918103       1 shared_informer.go:262] Caches are synced for resource quota
	I1030 18:34:48.063077       1 shared_informer.go:262] Caches are synced for attach detach
	I1030 18:34:48.431973       1 shared_informer.go:262] Caches are synced for garbage collector
	I1030 18:34:48.482547       1 shared_informer.go:262] Caches are synced for garbage collector
	I1030 18:34:48.482563       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1030 18:34:48.571564       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xxz2c"
	I1030 18:34:48.618705       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1030 18:34:48.819384       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-phfzq"
	I1030 18:34:48.823604       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-8lxx6"
	
	
	==> kube-proxy [64e0d55e4835] <==
	I1030 18:34:49.074363       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1030 18:34:49.074390       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1030 18:34:49.074401       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1030 18:34:49.090690       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1030 18:34:49.090703       1 server_others.go:206] "Using iptables Proxier"
	I1030 18:34:49.090777       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1030 18:34:49.090906       1 server.go:661] "Version info" version="v1.24.1"
	I1030 18:34:49.090915       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 18:34:49.091360       1 config.go:317] "Starting service config controller"
	I1030 18:34:49.091376       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1030 18:34:49.091414       1 config.go:226] "Starting endpoint slice config controller"
	I1030 18:34:49.091423       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1030 18:34:49.091722       1 config.go:444] "Starting node config controller"
	I1030 18:34:49.091747       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1030 18:34:49.191626       1 shared_informer.go:262] Caches are synced for service config
	I1030 18:34:49.191626       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1030 18:34:49.191840       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f47049212904] <==
	W1030 18:34:32.870933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 18:34:32.870946       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1030 18:34:32.870962       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1030 18:34:32.870985       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1030 18:34:32.871002       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 18:34:32.871017       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1030 18:34:32.871066       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1030 18:34:32.871074       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1030 18:34:32.871110       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1030 18:34:32.871116       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1030 18:34:32.871148       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 18:34:32.871189       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1030 18:34:32.871229       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1030 18:34:32.871258       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1030 18:34:32.871288       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 18:34:32.871295       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1030 18:34:32.871307       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1030 18:34:32.871326       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1030 18:34:33.701926       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 18:34:33.701997       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1030 18:34:33.743294       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1030 18:34:33.743423       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1030 18:34:33.833301       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 18:34:33.833376       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1030 18:34:34.365657       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-10-30 18:29:46 UTC, ends at Wed 2024-10-30 18:38:53 UTC. --
	Oct 30 18:34:35 running-upgrade-135000 kubelet[13009]: I1030 18:34:35.874168   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cab15d76c220d5b68e88ef6b38413679-usr-share-ca-certificates\") pod \"kube-apiserver-running-upgrade-135000\" (UID: \"cab15d76c220d5b68e88ef6b38413679\") " pod="kube-system/kube-apiserver-running-upgrade-135000"
	Oct 30 18:34:35 running-upgrade-135000 kubelet[13009]: I1030 18:34:35.874177   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1095c0d9cd204d7d4b7108385565b362-flexvolume-dir\") pod \"kube-controller-manager-running-upgrade-135000\" (UID: \"1095c0d9cd204d7d4b7108385565b362\") " pod="kube-system/kube-controller-manager-running-upgrade-135000"
	Oct 30 18:34:35 running-upgrade-135000 kubelet[13009]: I1030 18:34:35.874181   13009 reconciler.go:157] "Reconciler: start to sync state"
	Oct 30 18:34:36 running-upgrade-135000 kubelet[13009]: E1030 18:34:36.148444   13009 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-135000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-135000"
	Oct 30 18:34:47 running-upgrade-135000 kubelet[13009]: I1030 18:34:47.766717   13009 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 30 18:34:47 running-upgrade-135000 kubelet[13009]: I1030 18:34:47.767057   13009 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 30 18:34:47 running-upgrade-135000 kubelet[13009]: I1030 18:34:47.868226   13009 topology_manager.go:200] "Topology Admit Handler"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.068782   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8e80deac-963d-4d71-8bf8-e51f8b81dfc8-tmp\") pod \"storage-provisioner\" (UID: \"8e80deac-963d-4d71-8bf8-e51f8b81dfc8\") " pod="kube-system/storage-provisioner"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.068821   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsm65\" (UniqueName: \"kubernetes.io/projected/8e80deac-963d-4d71-8bf8-e51f8b81dfc8-kube-api-access-xsm65\") pod \"storage-provisioner\" (UID: \"8e80deac-963d-4d71-8bf8-e51f8b81dfc8\") " pod="kube-system/storage-provisioner"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: E1030 18:34:48.175072   13009 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: E1030 18:34:48.175151   13009 projected.go:192] Error preparing data for projected volume kube-api-access-xsm65 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: E1030 18:34:48.175209   13009 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8e80deac-963d-4d71-8bf8-e51f8b81dfc8-kube-api-access-xsm65 podName:8e80deac-963d-4d71-8bf8-e51f8b81dfc8 nodeName:}" failed. No retries permitted until 2024-10-30 18:34:48.675185421 +0000 UTC m=+13.172242497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xsm65" (UniqueName: "kubernetes.io/projected/8e80deac-963d-4d71-8bf8-e51f8b81dfc8-kube-api-access-xsm65") pod "storage-provisioner" (UID: "8e80deac-963d-4d71-8bf8-e51f8b81dfc8") : configmap "kube-root-ca.crt" not found
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.574988   13009 topology_manager.go:200] "Topology Admit Handler"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.775133   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9bb9\" (UniqueName: \"kubernetes.io/projected/4036b606-1820-4dec-93c6-13d71747825d-kube-api-access-d9bb9\") pod \"kube-proxy-xxz2c\" (UID: \"4036b606-1820-4dec-93c6-13d71747825d\") " pod="kube-system/kube-proxy-xxz2c"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.775163   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4036b606-1820-4dec-93c6-13d71747825d-lib-modules\") pod \"kube-proxy-xxz2c\" (UID: \"4036b606-1820-4dec-93c6-13d71747825d\") " pod="kube-system/kube-proxy-xxz2c"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.775175   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4036b606-1820-4dec-93c6-13d71747825d-kube-proxy\") pod \"kube-proxy-xxz2c\" (UID: \"4036b606-1820-4dec-93c6-13d71747825d\") " pod="kube-system/kube-proxy-xxz2c"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.775185   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4036b606-1820-4dec-93c6-13d71747825d-xtables-lock\") pod \"kube-proxy-xxz2c\" (UID: \"4036b606-1820-4dec-93c6-13d71747825d\") " pod="kube-system/kube-proxy-xxz2c"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.823299   13009 topology_manager.go:200] "Topology Admit Handler"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.831426   13009 topology_manager.go:200] "Topology Admit Handler"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.976270   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52fa825b-8686-4ef9-905f-e7ae62983c46-config-volume\") pod \"coredns-6d4b75cb6d-phfzq\" (UID: \"52fa825b-8686-4ef9-905f-e7ae62983c46\") " pod="kube-system/coredns-6d4b75cb6d-phfzq"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.976296   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz42c\" (UniqueName: \"kubernetes.io/projected/52fa825b-8686-4ef9-905f-e7ae62983c46-kube-api-access-qz42c\") pod \"coredns-6d4b75cb6d-phfzq\" (UID: \"52fa825b-8686-4ef9-905f-e7ae62983c46\") " pod="kube-system/coredns-6d4b75cb6d-phfzq"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.976309   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7wr\" (UniqueName: \"kubernetes.io/projected/75db840d-8efd-4f83-a0af-49e6b9cabb4e-kube-api-access-qz7wr\") pod \"coredns-6d4b75cb6d-8lxx6\" (UID: \"75db840d-8efd-4f83-a0af-49e6b9cabb4e\") " pod="kube-system/coredns-6d4b75cb6d-8lxx6"
	Oct 30 18:34:48 running-upgrade-135000 kubelet[13009]: I1030 18:34:48.976320   13009 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75db840d-8efd-4f83-a0af-49e6b9cabb4e-config-volume\") pod \"coredns-6d4b75cb6d-8lxx6\" (UID: \"75db840d-8efd-4f83-a0af-49e6b9cabb4e\") " pod="kube-system/coredns-6d4b75cb6d-8lxx6"
	Oct 30 18:38:36 running-upgrade-135000 kubelet[13009]: I1030 18:38:36.229720   13009 scope.go:110] "RemoveContainer" containerID="952bbd6d435af2bc077317b9a2a6b555d1fa39f895ef2d06bd38c99b5abfc376"
	Oct 30 18:38:37 running-upgrade-135000 kubelet[13009]: I1030 18:38:37.241240   13009 scope.go:110] "RemoveContainer" containerID="161e53b8f3c5e386c15e70ab589bc510f84db57b6f93c5f5efc3f58255d04f9f"
	
	
	==> storage-provisioner [bd8729aef14c] <==
	I1030 18:34:49.009523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 18:34:49.017373       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 18:34:49.017649       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 18:34:49.023949       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 18:34:49.025585       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-135000_976126af-e9e1-4be7-81e0-23fec31b7cee!
	I1030 18:34:49.025705       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9275e3bd-b0c8-4d50-a9c1-df5f4a9631ec", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-135000_976126af-e9e1-4be7-81e0-23fec31b7cee became leader
	I1030 18:34:49.126702       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-135000_976126af-e9e1-4be7-81e0-23fec31b7cee!
	

-- /stdout --
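Note on the kube-scheduler log above: the burst of "forbidden" reflector errors is ordinary control-plane bootstrap noise. The scheduler starts listing resources before the apiserver has finished installing the default RBAC bindings, and the errors stop once its caches sync (the final "Caches are synced" line). On a healthy cluster the binding can be verified directly; a minimal illustrative check, not part of the test run:

	kubectl get clusterrolebinding system:kube-scheduler -o wide
	kubectl auth can-i list nodes --as=system:kube-scheduler    # expect "yes" once RBAC is in place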
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-135000 -n running-upgrade-135000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-135000 -n running-upgrade-135000: exit status 2 (15.777668541s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-135000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-135000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-135000
--- FAIL: TestRunningBinaryUpgrade (596.92s)

TestKubernetesUpgrade (17.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-816000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-816000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.125058959s)

-- stdout --
	* [kubernetes-upgrade-816000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-816000" primary control-plane node in "kubernetes-upgrade-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:32:12.207206   14050 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:32:12.207629   14050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:32:12.207635   14050 out.go:358] Setting ErrFile to fd 2...
	I1030 11:32:12.207638   14050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:32:12.207817   14050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:32:12.209361   14050 out.go:352] Setting JSON to false
	I1030 11:32:12.227966   14050 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7303,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:32:12.228063   14050 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:32:12.233705   14050 out.go:177] * [kubernetes-upgrade-816000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:32:12.241876   14050 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:32:12.241992   14050 notify.go:220] Checking for updates...
	I1030 11:32:12.248821   14050 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:32:12.251834   14050 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:32:12.254836   14050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:32:12.257890   14050 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:32:12.260894   14050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:32:12.264121   14050 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:32:12.264199   14050 config.go:182] Loaded profile config "running-upgrade-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:32:12.264254   14050 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:32:12.268796   14050 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:32:12.274826   14050 start.go:297] selected driver: qemu2
	I1030 11:32:12.274840   14050 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:32:12.274847   14050 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:32:12.277368   14050 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:32:12.280790   14050 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:32:12.283946   14050 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 11:32:12.283959   14050 cni.go:84] Creating CNI manager for ""
	I1030 11:32:12.283980   14050 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1030 11:32:12.284013   14050 start.go:340] cluster config:
	{Name:kubernetes-upgrade-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:32:12.288583   14050 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:32:12.296875   14050 out.go:177] * Starting "kubernetes-upgrade-816000" primary control-plane node in "kubernetes-upgrade-816000" cluster
	I1030 11:32:12.300803   14050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:32:12.300819   14050 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1030 11:32:12.300824   14050 cache.go:56] Caching tarball of preloaded images
	I1030 11:32:12.300895   14050 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:32:12.300901   14050 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1030 11:32:12.300945   14050 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/kubernetes-upgrade-816000/config.json ...
	I1030 11:32:12.300956   14050 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/kubernetes-upgrade-816000/config.json: {Name:mk675d6ff3dfa0fc17d4c18067c45da1f6520eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:32:12.301284   14050 start.go:360] acquireMachinesLock for kubernetes-upgrade-816000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:32:12.301329   14050 start.go:364] duration metric: took 39.584µs to acquireMachinesLock for "kubernetes-upgrade-816000"
	I1030 11:32:12.301341   14050 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:32:12.301376   14050 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:32:12.309847   14050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:32:12.335606   14050 start.go:159] libmachine.API.Create for "kubernetes-upgrade-816000" (driver="qemu2")
	I1030 11:32:12.335633   14050 client.go:168] LocalClient.Create starting
	I1030 11:32:12.335707   14050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:32:12.335754   14050 main.go:141] libmachine: Decoding PEM data...
	I1030 11:32:12.335766   14050 main.go:141] libmachine: Parsing certificate...
	I1030 11:32:12.335816   14050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:32:12.335845   14050 main.go:141] libmachine: Decoding PEM data...
	I1030 11:32:12.335853   14050 main.go:141] libmachine: Parsing certificate...
	I1030 11:32:12.336241   14050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:32:12.567892   14050 main.go:141] libmachine: Creating SSH key...
	I1030 11:32:12.604042   14050 main.go:141] libmachine: Creating Disk image...
	I1030 11:32:12.604049   14050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:32:12.604246   14050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:12.616155   14050 main.go:141] libmachine: STDOUT: 
	I1030 11:32:12.616175   14050 main.go:141] libmachine: STDERR: 
	I1030 11:32:12.616236   14050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2 +20000M
	I1030 11:32:12.624975   14050 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:32:12.624991   14050 main.go:141] libmachine: STDERR: 
	I1030 11:32:12.625010   14050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:12.625017   14050 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:32:12.625029   14050 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:32:12.625061   14050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a2:3c:0d:f0:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:12.626983   14050 main.go:141] libmachine: STDOUT: 
	I1030 11:32:12.626996   14050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:32:12.627017   14050 client.go:171] duration metric: took 291.380875ms to LocalClient.Create
	I1030 11:32:14.629122   14050 start.go:128] duration metric: took 2.327756875s to createHost
	I1030 11:32:14.629166   14050 start.go:83] releasing machines lock for "kubernetes-upgrade-816000", held for 2.327858333s
	W1030 11:32:14.629198   14050 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:32:14.639517   14050 out.go:177] * Deleting "kubernetes-upgrade-816000" in qemu2 ...
	W1030 11:32:14.665614   14050 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:32:14.665628   14050 start.go:729] Will try again in 5 seconds ...
	I1030 11:32:19.667748   14050 start.go:360] acquireMachinesLock for kubernetes-upgrade-816000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:32:19.668134   14050 start.go:364] duration metric: took 308.375µs to acquireMachinesLock for "kubernetes-upgrade-816000"
	I1030 11:32:19.668204   14050 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:32:19.668333   14050 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:32:19.678769   14050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:32:19.712662   14050 start.go:159] libmachine.API.Create for "kubernetes-upgrade-816000" (driver="qemu2")
	I1030 11:32:19.712714   14050 client.go:168] LocalClient.Create starting
	I1030 11:32:19.712833   14050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:32:19.712909   14050 main.go:141] libmachine: Decoding PEM data...
	I1030 11:32:19.712927   14050 main.go:141] libmachine: Parsing certificate...
	I1030 11:32:19.712995   14050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:32:19.713045   14050 main.go:141] libmachine: Decoding PEM data...
	I1030 11:32:19.713057   14050 main.go:141] libmachine: Parsing certificate...
	I1030 11:32:19.713604   14050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:32:19.887343   14050 main.go:141] libmachine: Creating SSH key...
	I1030 11:32:20.233413   14050 main.go:141] libmachine: Creating Disk image...
	I1030 11:32:20.233427   14050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:32:20.233722   14050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:20.244696   14050 main.go:141] libmachine: STDOUT: 
	I1030 11:32:20.244727   14050 main.go:141] libmachine: STDERR: 
	I1030 11:32:20.244789   14050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2 +20000M
	I1030 11:32:20.253631   14050 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:32:20.253653   14050 main.go:141] libmachine: STDERR: 
	I1030 11:32:20.253665   14050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:20.253669   14050 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:32:20.253678   14050 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:32:20.253701   14050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:41:e5:32:64:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:20.255601   14050 main.go:141] libmachine: STDOUT: 
	I1030 11:32:20.255617   14050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:32:20.255630   14050 client.go:171] duration metric: took 542.915958ms to LocalClient.Create
	I1030 11:32:22.257912   14050 start.go:128] duration metric: took 2.589549375s to createHost
	I1030 11:32:22.258001   14050 start.go:83] releasing machines lock for "kubernetes-upgrade-816000", held for 2.589880667s
	W1030 11:32:22.258414   14050 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:32:22.266828   14050 out.go:201] 
	W1030 11:32:22.273923   14050 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:32:22.273965   14050 out.go:270] * 
	* 
	W1030 11:32:22.275993   14050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:32:22.284784   14050 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-816000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
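Both VM creation attempts above die at the same step: libmachine launches qemu-system-aarch64 through socket_vmnet_client, the client cannot connect to /var/run/socket_vmnet, and QEMU never receives its vmnet file descriptor (the "-netdev socket,id=net0,fd=3" argument). A minimal triage sketch for the CI host, assuming the Homebrew-managed socket_vmnet service that the minikube docs recommend (paths are taken from the logs above; the service name and smoke test are assumptions, not part of the test run):

	ls -l /var/run/socket_vmnet                          # is the socket present at all?
	sudo launchctl list | grep -i socket_vmnet           # is the daemon loaded?
	sudo brew services restart socket_vmnet              # restart the helper (assumes a Homebrew install)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true   # connection smoke test

If the last command also reports "Connection refused", every qemu2+socket_vmnet test in this run will fail the same way, which matches the repeated GUEST_PROVISION errors in this report.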
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-816000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-816000: (1.981161792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-816000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-816000 status --format={{.Host}}: exit status 7 (67.844959ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
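The harness tolerates this ("may be ok") because minikube status composes its exit code from bit flags rather than a single pass/fail value; if the bit layout in minikube's status command applies here (host, cluster, and kubernetes each contributing one bit, an assumption about this build), exit 7 = 1+2+4, i.e. all three components report stopped, which is expected right after the stop above. A hedged way to observe it:

	out/minikube-darwin-arm64 -p kubernetes-upgrade-816000 status; echo "exit=$?"
	# exit=7: host, cluster, and kubernetes all stopped (assumed bit layout)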
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-816000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-816000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.1787845s)

-- stdout --
	* [kubernetes-upgrade-816000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-816000" primary control-plane node in "kubernetes-upgrade-816000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:32:24.385775   14076 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:32:24.385946   14076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:32:24.385949   14076 out.go:358] Setting ErrFile to fd 2...
	I1030 11:32:24.385952   14076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:32:24.386078   14076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:32:24.387494   14076 out.go:352] Setting JSON to false
	I1030 11:32:24.406634   14076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7315,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:32:24.406707   14076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:32:24.411120   14076 out.go:177] * [kubernetes-upgrade-816000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:32:24.419005   14076 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:32:24.419051   14076 notify.go:220] Checking for updates...
	I1030 11:32:24.427134   14076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:32:24.431063   14076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:32:24.434066   14076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:32:24.437119   14076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:32:24.440063   14076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:32:24.443403   14076 config.go:182] Loaded profile config "kubernetes-upgrade-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1030 11:32:24.443681   14076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:32:24.448113   14076 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:32:24.455138   14076 start.go:297] selected driver: qemu2
	I1030 11:32:24.455146   14076 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:32:24.455217   14076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:32:24.457978   14076 cni.go:84] Creating CNI manager for ""
	I1030 11:32:24.458009   14076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:32:24.458041   14076 start.go:340] cluster config:
	{Name:kubernetes-upgrade-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:32:24.462391   14076 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:32:24.470071   14076 out.go:177] * Starting "kubernetes-upgrade-816000" primary control-plane node in "kubernetes-upgrade-816000" cluster
	I1030 11:32:24.474113   14076 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:32:24.474133   14076 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:32:24.474142   14076 cache.go:56] Caching tarball of preloaded images
	I1030 11:32:24.474217   14076 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:32:24.474229   14076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:32:24.474283   14076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/kubernetes-upgrade-816000/config.json ...
	I1030 11:32:24.474782   14076 start.go:360] acquireMachinesLock for kubernetes-upgrade-816000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:32:24.474813   14076 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "kubernetes-upgrade-816000"
	I1030 11:32:24.474822   14076 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:32:24.474827   14076 fix.go:54] fixHost starting: 
	I1030 11:32:24.474959   14076 fix.go:112] recreateIfNeeded on kubernetes-upgrade-816000: state=Stopped err=<nil>
	W1030 11:32:24.474966   14076 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:32:24.479093   14076 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-816000" ...
	I1030 11:32:24.486912   14076 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:32:24.486945   14076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:41:e5:32:64:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:24.489199   14076 main.go:141] libmachine: STDOUT: 
	I1030 11:32:24.489220   14076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:32:24.489250   14076 fix.go:56] duration metric: took 14.421708ms for fixHost
	I1030 11:32:24.489265   14076 start.go:83] releasing machines lock for "kubernetes-upgrade-816000", held for 14.437791ms
	W1030 11:32:24.489272   14076 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:32:24.489308   14076 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:32:24.489312   14076 start.go:729] Will try again in 5 seconds ...
	I1030 11:32:29.491323   14076 start.go:360] acquireMachinesLock for kubernetes-upgrade-816000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:32:29.491424   14076 start.go:364] duration metric: took 80.834µs to acquireMachinesLock for "kubernetes-upgrade-816000"
	I1030 11:32:29.491439   14076 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:32:29.491444   14076 fix.go:54] fixHost starting: 
	I1030 11:32:29.491585   14076 fix.go:112] recreateIfNeeded on kubernetes-upgrade-816000: state=Stopped err=<nil>
	W1030 11:32:29.491590   14076 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:32:29.495818   14076 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-816000" ...
	I1030 11:32:29.499740   14076 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:32:29.499806   14076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:41:e5:32:64:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubernetes-upgrade-816000/disk.qcow2
	I1030 11:32:29.502171   14076 main.go:141] libmachine: STDOUT: 
	I1030 11:32:29.502190   14076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:32:29.502215   14076 fix.go:56] duration metric: took 10.770709ms for fixHost
	I1030 11:32:29.502220   14076 start.go:83] releasing machines lock for "kubernetes-upgrade-816000", held for 10.791375ms
	W1030 11:32:29.502275   14076 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:32:29.506805   14076 out.go:201] 
	W1030 11:32:29.510721   14076 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:32:29.510728   14076 out.go:270] * 
	* 
	W1030 11:32:29.511270   14076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:32:29.522719   14076 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-816000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-816000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-816000 version --output=json: exit status 1 (30.500584ms)

** stderr ** 
	error: context "kubernetes-upgrade-816000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-30 11:32:29.561948 -0700 PDT m=+948.748459543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-816000 -n kubernetes-upgrade-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-816000 -n kubernetes-upgrade-816000: exit status 7 (36.610042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-816000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-816000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-816000
--- FAIL: TestKubernetesUpgrade (17.51s)
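Every start attempt in the failure above dies at the same point: qemu-system-aarch64 is launched through socket_vmnet_client, which cannot reach the /var/run/socket_vmnet control socket, so the qemu2 driver never gets its network backend and the test exits with GUEST_PROVISION. A minimal probe (a hypothetical helper, not part of the minikube test suite) that reproduces the failing check by dialing the same unix socket:

package main

// probe_socket_vmnet: dials the socket_vmnet control socket that the qemu2
// driver depends on. A "connection refused" here corresponds to the
// 'Failed to connect to "/var/run/socket_vmnet"' errors in the log above,
// i.e. the socket_vmnet daemon is not running on the host.

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}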

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.95s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19883
- KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4198318978/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.95s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.01s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19883
- KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3994318571/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.01s)
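Both TestHyperkitDriverSkipUpgrade subtests above fail identically: the host is darwin/arm64 and hyperkit exists only for darwin/amd64, so minikube refuses the driver with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. An illustrative guard in Go (a sketch of the check's shape, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// hyperkit builds on macOS's Hypervisor.framework for x86_64 hosts only,
	// so everything except darwin/amd64 is rejected up front.
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		fmt.Fprintf(os.Stderr,
			"X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
		os.Exit(56) // the exit status the test observes above
	}
	fmt.Println("hyperkit driver is usable on this platform")
}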

TestStoppedBinaryUpgrade/Upgrade (690.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3995239095 start -p stopped-upgrade-877000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3995239095 start -p stopped-upgrade-877000 --memory=2200 --vm-driver=qemu2 : (39.065444666s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3995239095 -p stopped-upgrade-877000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3995239095 -p stopped-upgrade-877000 stop: (12.120428292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-877000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-877000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10m39.705191417s)

-- stdout --
	* [stopped-upgrade-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-877000" primary control-plane node in "stopped-upgrade-877000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-877000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1030 11:33:22.675643   14108 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:33:22.675840   14108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:33:22.675845   14108 out.go:358] Setting ErrFile to fd 2...
	I1030 11:33:22.675848   14108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:33:22.676010   14108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:33:22.677343   14108 out.go:352] Setting JSON to false
	I1030 11:33:22.698098   14108 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7373,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:33:22.698186   14108 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:33:22.703421   14108 out.go:177] * [stopped-upgrade-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:33:22.711312   14108 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:33:22.711378   14108 notify.go:220] Checking for updates...
	I1030 11:33:22.718278   14108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:33:22.721347   14108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:33:22.725263   14108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:33:22.728290   14108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:33:22.731392   14108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:33:22.734605   14108 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:33:22.738237   14108 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 11:33:22.741291   14108 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:33:22.745273   14108 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:33:22.752307   14108 start.go:297] selected driver: qemu2
	I1030 11:33:22.752312   14108 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:33:22.752359   14108 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:33:22.754985   14108 cni.go:84] Creating CNI manager for ""
	I1030 11:33:22.755013   14108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:33:22.755031   14108 start.go:340] cluster config:
	{Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:33:22.755082   14108 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:33:22.763301   14108 out.go:177] * Starting "stopped-upgrade-877000" primary control-plane node in "stopped-upgrade-877000" cluster
	I1030 11:33:22.766271   14108 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1030 11:33:22.766286   14108 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1030 11:33:22.766294   14108 cache.go:56] Caching tarball of preloaded images
	I1030 11:33:22.766363   14108 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:33:22.766369   14108 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1030 11:33:22.766422   14108 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/config.json ...
	I1030 11:33:22.766759   14108 start.go:360] acquireMachinesLock for stopped-upgrade-877000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:33:22.766804   14108 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "stopped-upgrade-877000"
	I1030 11:33:22.766811   14108 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:33:22.766816   14108 fix.go:54] fixHost starting: 
	I1030 11:33:22.766936   14108 fix.go:112] recreateIfNeeded on stopped-upgrade-877000: state=Stopped err=<nil>
	W1030 11:33:22.766943   14108 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:33:22.774273   14108 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-877000" ...
	I1030 11:33:22.778297   14108 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:33:22.778388   14108 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/qemu.pid -nic user,model=virtio,hostfwd=tcp::57382-:22,hostfwd=tcp::57383-:2376,hostname=stopped-upgrade-877000 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/disk.qcow2
	I1030 11:33:22.825269   14108 main.go:141] libmachine: STDOUT: 
	I1030 11:33:22.825301   14108 main.go:141] libmachine: STDERR: 
	I1030 11:33:22.825309   14108 main.go:141] libmachine: Waiting for VM to start (ssh -p 57382 docker@127.0.0.1)...
	I1030 11:33:42.944860   14108 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/config.json ...
	I1030 11:33:42.945776   14108 machine.go:93] provisionDockerMachine start ...
	I1030 11:33:42.946024   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:42.946598   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:42.946615   14108 main.go:141] libmachine: About to run SSH command:
	hostname
	I1030 11:33:43.031415   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1030 11:33:43.031442   14108 buildroot.go:166] provisioning hostname "stopped-upgrade-877000"
	I1030 11:33:43.031547   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.031753   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.031765   14108 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-877000 && echo "stopped-upgrade-877000" | sudo tee /etc/hostname
	I1030 11:33:43.107352   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-877000
	
	I1030 11:33:43.107460   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.107624   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.107637   14108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-877000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-877000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-877000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 11:33:43.177237   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 11:33:43.177250   14108 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19883-11536/.minikube CaCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19883-11536/.minikube}
	I1030 11:33:43.177266   14108 buildroot.go:174] setting up certificates
	I1030 11:33:43.177271   14108 provision.go:84] configureAuth start
	I1030 11:33:43.177279   14108 provision.go:143] copyHostCerts
	I1030 11:33:43.177344   14108 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem, removing ...
	I1030 11:33:43.177350   14108 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem
	I1030 11:33:43.177462   14108 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.pem (1082 bytes)
	I1030 11:33:43.177635   14108 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem, removing ...
	I1030 11:33:43.177641   14108 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem
	I1030 11:33:43.177694   14108 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/cert.pem (1123 bytes)
	I1030 11:33:43.177853   14108 exec_runner.go:144] found /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem, removing ...
	I1030 11:33:43.177857   14108 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem
	I1030 11:33:43.181017   14108 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19883-11536/.minikube/key.pem (1675 bytes)
	I1030 11:33:43.181170   14108 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-877000 san=[127.0.0.1 localhost minikube stopped-upgrade-877000]
	I1030 11:33:43.241466   14108 provision.go:177] copyRemoteCerts
	I1030 11:33:43.241527   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 11:33:43.241536   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	I1030 11:33:43.275100   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1030 11:33:43.281976   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1030 11:33:43.289419   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 11:33:43.296584   14108 provision.go:87] duration metric: took 119.305209ms to configureAuth
	I1030 11:33:43.296593   14108 buildroot.go:189] setting minikube options for container-runtime
	I1030 11:33:43.296708   14108 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:33:43.296758   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.296850   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.296855   14108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1030 11:33:43.355322   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1030 11:33:43.355331   14108 buildroot.go:70] root file system type: tmpfs
	I1030 11:33:43.355386   14108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1030 11:33:43.355445   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.355559   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.355596   14108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1030 11:33:43.419302   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1030 11:33:43.419371   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.419491   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.419502   14108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1030 11:33:43.815058   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1030 11:33:43.815074   14108 machine.go:96] duration metric: took 869.297125ms to provisionDockerMachine
	I1030 11:33:43.815081   14108 start.go:293] postStartSetup for "stopped-upgrade-877000" (driver="qemu2")
	I1030 11:33:43.815087   14108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 11:33:43.815166   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 11:33:43.815178   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	I1030 11:33:43.846791   14108 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 11:33:43.848112   14108 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 11:33:43.848120   14108 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19883-11536/.minikube/addons for local assets ...
	I1030 11:33:43.848202   14108 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19883-11536/.minikube/files for local assets ...
	I1030 11:33:43.848301   14108 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem -> 120432.pem in /etc/ssl/certs
	I1030 11:33:43.848423   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 11:33:43.850861   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem --> /etc/ssl/certs/120432.pem (1708 bytes)
	I1030 11:33:43.857687   14108 start.go:296] duration metric: took 42.601459ms for postStartSetup
	I1030 11:33:43.857699   14108 fix.go:56] duration metric: took 21.091132333s for fixHost
	I1030 11:33:43.857744   14108 main.go:141] libmachine: Using SSH client type: native
	I1030 11:33:43.857847   14108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a025f0] 0x100a04e30 <nil>  [] 0s} localhost 57382 <nil> <nil>}
	I1030 11:33:43.857852   14108 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 11:33:43.917028   14108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730313224.270688546
	
	I1030 11:33:43.917036   14108 fix.go:216] guest clock: 1730313224.270688546
	I1030 11:33:43.917040   14108 fix.go:229] Guest: 2024-10-30 11:33:44.270688546 -0700 PDT Remote: 2024-10-30 11:33:43.857701 -0700 PDT m=+21.216089376 (delta=412.987546ms)
	I1030 11:33:43.917051   14108 fix.go:200] guest clock delta is within tolerance: 412.987546ms
	I1030 11:33:43.917053   14108 start.go:83] releasing machines lock for "stopped-upgrade-877000", held for 21.15049425s
	I1030 11:33:43.917129   14108 ssh_runner.go:195] Run: cat /version.json
	I1030 11:33:43.917139   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	I1030 11:33:43.917130   14108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 11:33:43.917180   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	W1030 11:33:43.917640   14108 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:57533->127.0.0.1:57382: write: broken pipe
	I1030 11:33:43.917661   14108 retry.go:31] will retry after 226.223364ms: ssh: handshake failed: write tcp 127.0.0.1:57533->127.0.0.1:57382: write: broken pipe
	W1030 11:33:43.946493   14108 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1030 11:33:43.946538   14108 ssh_runner.go:195] Run: systemctl --version
	I1030 11:33:43.948293   14108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 11:33:43.949865   14108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 11:33:43.949901   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1030 11:33:43.953260   14108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1030 11:33:43.957866   14108 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 11:33:43.957876   14108 start.go:495] detecting cgroup driver to use...
	I1030 11:33:43.957954   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 11:33:43.964913   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1030 11:33:43.968211   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1030 11:33:43.970998   14108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1030 11:33:43.971027   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1030 11:33:43.973929   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1030 11:33:43.977276   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1030 11:33:43.980697   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1030 11:33:43.983633   14108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 11:33:43.986501   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1030 11:33:43.989395   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1030 11:33:43.992593   14108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1030 11:33:43.995394   14108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 11:33:43.997939   14108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 11:33:44.001087   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:33:44.073345   14108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1030 11:33:44.079622   14108 start.go:495] detecting cgroup driver to use...
	I1030 11:33:44.079701   14108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1030 11:33:44.085184   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 11:33:44.090492   14108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 11:33:44.100400   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 11:33:44.104636   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1030 11:33:44.109241   14108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1030 11:33:44.152002   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1030 11:33:44.156802   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 11:33:44.162365   14108 ssh_runner.go:195] Run: which cri-dockerd
	I1030 11:33:44.163636   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1030 11:33:44.166127   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1030 11:33:44.170967   14108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1030 11:33:44.253112   14108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1030 11:33:44.326795   14108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1030 11:33:44.326856   14108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1030 11:33:44.331935   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:33:44.407980   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1030 11:33:44.515517   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1030 11:33:44.520353   14108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1030 11:35:43.011379   14108 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m58.4923655s)
	I1030 11:35:43.011540   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1030 11:35:43.021975   14108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1030 11:35:43.094762   14108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1030 11:35:43.168936   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:43.238342   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1030 11:35:43.245152   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1030 11:35:43.249892   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:43.326353   14108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1030 11:35:43.365515   14108 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1030 11:35:43.365611   14108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1030 11:35:43.368622   14108 start.go:563] Will wait 60s for crictl version
	I1030 11:35:43.368685   14108 ssh_runner.go:195] Run: which crictl
	I1030 11:35:43.370168   14108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 11:35:43.385774   14108 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1030 11:35:43.385858   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1030 11:35:43.403285   14108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1030 11:35:43.425582   14108 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1030 11:35:43.425672   14108 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1030 11:35:43.427142   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 11:35:43.431306   14108 kubeadm.go:883] updating cluster {Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1030 11:35:43.431352   14108 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1030 11:35:43.431404   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1030 11:35:43.442137   14108 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1030 11:35:43.442157   14108 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1030 11:35:43.442217   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1030 11:35:43.445454   14108 ssh_runner.go:195] Run: which lz4
	I1030 11:35:43.446668   14108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 11:35:43.447864   14108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 11:35:43.447875   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1030 11:35:44.448087   14108 docker.go:653] duration metric: took 1.001469833s to copy over tarball
	I1030 11:35:44.448175   14108 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 11:35:45.636630   14108 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.188454792s)
	I1030 11:35:45.636644   14108 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 11:35:45.652449   14108 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1030 11:35:45.655787   14108 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1030 11:35:45.661202   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:45.738003   14108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1030 11:35:47.286314   14108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548312458s)
	I1030 11:35:47.286426   14108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1030 11:35:47.301434   14108 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1030 11:35:47.301443   14108 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1030 11:35:47.301447   14108 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 11:35:47.306048   14108 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:47.308009   14108 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.310036   14108 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.310438   14108 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:47.311961   14108 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.312105   14108 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.313748   14108 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.313804   14108 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.315053   14108 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1030 11:35:47.315156   14108 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.316133   14108 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.316741   14108 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:47.317284   14108 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:47.317654   14108 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1030 11:35:47.318930   14108 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:47.319485   14108 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:47.878942   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.889928   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.890306   14108 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1030 11:35:47.890333   14108 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.890364   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1030 11:35:47.907355   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1030 11:35:47.907484   14108 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1030 11:35:47.907503   14108 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.907552   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1030 11:35:47.918592   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1030 11:35:47.935700   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.947568   14108 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1030 11:35:47.947597   14108 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.947635   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1030 11:35:47.960483   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1030 11:35:47.967891   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.978857   14108 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1030 11:35:47.978882   14108 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.978963   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1030 11:35:47.993965   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1030 11:35:48.033135   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1030 11:35:48.044167   14108 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1030 11:35:48.044187   14108 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1030 11:35:48.044255   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1030 11:35:48.054386   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1030 11:35:48.054525   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1030 11:35:48.056183   14108 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1030 11:35:48.056195   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1030 11:35:48.064011   14108 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1030 11:35:48.064022   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1030 11:35:48.093327   14108 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1030 11:35:48.138526   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:48.149260   14108 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1030 11:35:48.149287   14108 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:48.149363   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1030 11:35:48.160706   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1030 11:35:48.190082   14108 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1030 11:35:48.190260   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:48.200560   14108 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1030 11:35:48.200582   14108 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:48.200650   14108 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1030 11:35:48.210660   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1030 11:35:48.210808   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1030 11:35:48.212213   14108 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1030 11:35:48.212223   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1030 11:35:48.253046   14108 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1030 11:35:48.253059   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1030 11:35:48.292956   14108 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1030 11:35:48.320101   14108 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1030 11:35:48.320214   14108 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:48.331090   14108 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1030 11:35:48.331113   14108 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:48.331177   14108 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:35:48.351253   14108 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1030 11:35:48.351400   14108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1030 11:35:48.352793   14108 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1030 11:35:48.352805   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1030 11:35:48.387602   14108 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1030 11:35:48.387618   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1030 11:35:48.620054   14108 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1030 11:35:48.620092   14108 cache_images.go:92] duration metric: took 1.318653834s to LoadCachedImages
	W1030 11:35:48.620128   14108 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
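
	The "needs transfer" branch above is straightforward to replay by hand. A minimal sketch built only from commands already present in the log (image tag and paths copied from the entries above; run inside the guest):

	    # Compare the image ID the runtime actually holds against the expected hash.
	    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7
	    # On a mismatch, minikube drops the stale tag ...
	    docker rmi registry.k8s.io/pause:3.7
	    # ... copies the cached tarball to /var/lib/minikube/images, then loads it:
	    sudo cat /var/lib/minikube/images/pause_3.7 | docker load
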
	I1030 11:35:48.620139   14108 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1030 11:35:48.620198   14108 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-877000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1030 11:35:48.620285   14108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1030 11:35:48.633959   14108 cni.go:84] Creating CNI manager for ""
	I1030 11:35:48.633971   14108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:35:48.633977   14108 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1030 11:35:48.633988   14108 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-877000 NodeName:stopped-upgrade-877000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 11:35:48.634064   14108 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-877000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 11:35:48.634122   14108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1030 11:35:48.637091   14108 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 11:35:48.637132   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 11:35:48.639778   14108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1030 11:35:48.644785   14108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 11:35:48.649563   14108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
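
	The 2096-byte kubeadm.yaml.new written above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". A quick sanity check (a sketch; the grep pattern is illustrative, not part of minikube):

	    # List the apiVersion/kind headers of each document in the stream.
	    sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new
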
	I1030 11:35:48.654775   14108 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1030 11:35:48.655895   14108 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
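
	The /etc/hosts rewrite above follows the usual idempotent pattern: strip any existing control-plane.minikube.internal entry, append a fresh one, and copy the temp file over the original in a single step. The same pattern spelled out (variable names are illustrative):

	    IP=10.0.2.15
	    NAME=control-plane.minikube.internal
	    # Drop the old entry (if any), append the new one, then install in one copy.
	    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
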
	I1030 11:35:48.659962   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:35:48.736241   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 11:35:48.742861   14108 certs.go:68] Setting up /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000 for IP: 10.0.2.15
	I1030 11:35:48.742871   14108 certs.go:194] generating shared ca certs ...
	I1030 11:35:48.742879   14108 certs.go:226] acquiring lock for ca certs: {Name:mke98b939cb7b412ec11c6499518b74392aa286f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:48.743093   14108 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.key
	I1030 11:35:48.743859   14108 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.key
	I1030 11:35:48.743870   14108 certs.go:256] generating profile certs ...
	I1030 11:35:48.744127   14108 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.key
	I1030 11:35:48.744146   14108 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3
	I1030 11:35:48.744160   14108 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1030 11:35:48.860024   14108 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3 ...
	I1030 11:35:48.860039   14108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3: {Name:mk8dc9c9d5df0b51eafee344383b82637dfd5adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:48.860450   14108 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3 ...
	I1030 11:35:48.860458   14108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3: {Name:mkceb498d88f05e1cbeff333e74974ee13f252ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:48.860643   14108 certs.go:381] copying /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt.3dace6f3 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt
	I1030 11:35:48.862777   14108 certs.go:385] copying /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key.3dace6f3 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key
	I1030 11:35:48.863134   14108 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/proxy-client.key
	I1030 11:35:48.863300   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043.pem (1338 bytes)
	W1030 11:35:48.863523   14108 certs.go:480] ignoring /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043_empty.pem, impossibly tiny 0 bytes
	I1030 11:35:48.863528   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca-key.pem (1675 bytes)
	I1030 11:35:48.863561   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem (1082 bytes)
	I1030 11:35:48.863597   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem (1123 bytes)
	I1030 11:35:48.863628   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/key.pem (1675 bytes)
	I1030 11:35:48.863694   14108 certs.go:484] found cert: /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem (1708 bytes)
	I1030 11:35:48.864051   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 11:35:48.871487   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 11:35:48.878847   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 11:35:48.885796   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1030 11:35:48.892579   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1030 11:35:48.899575   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1030 11:35:48.907211   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 11:35:48.914929   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 11:35:48.922299   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 11:35:48.929110   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/12043.pem --> /usr/share/ca-certificates/12043.pem (1338 bytes)
	I1030 11:35:48.935768   14108 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/ssl/certs/120432.pem --> /usr/share/ca-certificates/120432.pem (1708 bytes)
	I1030 11:35:48.943263   14108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1030 11:35:48.948828   14108 ssh_runner.go:195] Run: openssl version
	I1030 11:35:48.950823   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 11:35:48.953947   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:35:48.955424   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 30 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:35:48.955449   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 11:35:48.957300   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 11:35:48.960147   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12043.pem && ln -fs /usr/share/ca-certificates/12043.pem /etc/ssl/certs/12043.pem"
	I1030 11:35:48.963562   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12043.pem
	I1030 11:35:48.965317   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 30 18:17 /usr/share/ca-certificates/12043.pem
	I1030 11:35:48.965346   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12043.pem
	I1030 11:35:48.967083   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12043.pem /etc/ssl/certs/51391683.0"
	I1030 11:35:48.970181   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120432.pem && ln -fs /usr/share/ca-certificates/120432.pem /etc/ssl/certs/120432.pem"
	I1030 11:35:48.973157   14108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120432.pem
	I1030 11:35:48.974601   14108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 30 18:17 /usr/share/ca-certificates/120432.pem
	I1030 11:35:48.974624   14108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120432.pem
	I1030 11:35:48.976499   14108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/120432.pem /etc/ssl/certs/3ec20f2e.0"
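
	The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's lookup convention: a CA in /etc/ssl/certs is found via a symlink named after its subject-name hash, which is what the `openssl x509 -hash -noout` calls compute. A sketch rebuilding the link for the minikubeCA cert by hand:

	    # The subject hash decides the symlink name OpenSSL expects under /etc/ssl/certs.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
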
	I1030 11:35:48.979837   14108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1030 11:35:48.981222   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 11:35:48.983329   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 11:35:48.985274   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 11:35:48.987099   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 11:35:48.988854   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 11:35:48.990580   14108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
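
	Each `-checkend 86400` probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, non-zero means it expires within the window. A one-liner for any of the certs (path copied from the log):

	    # Exits 0 while the cert has at least 24h of validity left.
	    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt \
	      && echo 'valid for >= 24h' || echo 'expires within 24h'
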
	I1030 11:35:48.992465   14108 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57416 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1030 11:35:48.992545   14108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1030 11:35:49.006363   14108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 11:35:49.009747   14108 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1030 11:35:49.009759   14108 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1030 11:35:49.009794   14108 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 11:35:49.012652   14108 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 11:35:49.012963   14108 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-877000" does not appear in /Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:35:49.013086   14108 kubeconfig.go:62] /Users/jenkins/minikube-integration/19883-11536/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-877000" cluster setting kubeconfig missing "stopped-upgrade-877000" context setting]
	I1030 11:35:49.013286   14108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/kubeconfig: {Name:mkea525c0c25887bd8d562c8182eb3da015af133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:35:49.013722   14108 kapi.go:59] client config for stopped-upgrade-877000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.key", CAFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10245e7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 11:35:49.014213   14108 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 11:35:49.016989   14108 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-877000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1030 11:35:49.016994   14108 kubeadm.go:1160] stopping kube-system containers ...
	I1030 11:35:49.017046   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1030 11:35:49.027974   14108 docker.go:483] Stopping containers: [7b1ffc1f1881 d6a9e90789a1 74c76d98b1d5 9e4f9a6580ee ea0de2881762 4e35759a58bf 647d7c652201 f0309de3b673]
	I1030 11:35:49.028051   14108 ssh_runner.go:195] Run: docker stop 7b1ffc1f1881 d6a9e90789a1 74c76d98b1d5 9e4f9a6580ee ea0de2881762 4e35759a58bf 647d7c652201 f0309de3b673
	I1030 11:35:49.038979   14108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 11:35:49.044998   14108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 11:35:49.047892   14108 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 11:35:49.047902   14108 kubeadm.go:157] found existing configuration files:
	
	I1030 11:35:49.047932   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/admin.conf
	I1030 11:35:49.050823   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 11:35:49.050855   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 11:35:49.053578   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/kubelet.conf
	I1030 11:35:49.056138   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 11:35:49.056166   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 11:35:49.059259   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/controller-manager.conf
	I1030 11:35:49.061983   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 11:35:49.062012   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 11:35:49.064601   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/scheduler.conf
	I1030 11:35:49.067548   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 11:35:49.067575   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1030 11:35:49.070760   14108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 11:35:49.073605   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.098100   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.620863   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.745773   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 11:35:49.780996   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
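
	The restart path replays individual kubeadm init phases instead of running a full `kubeadm init`. Collected from the five Run: lines above, the sequence is (a sketch; the shell variables are illustrative):

	    CFG=/var/tmp/minikube/kubeadm.yaml
	    BIN=/var/lib/minikube/binaries/v1.24.1
	    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CFG"
	    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CFG"
	    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CFG"
	    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
	    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CFG"
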
	I1030 11:35:49.803810   14108 api_server.go:52] waiting for apiserver process to appear ...
	I1030 11:35:49.803911   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:35:50.305059   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:35:50.805945   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:35:50.810448   14108 api_server.go:72] duration metric: took 1.006649417s to wait for apiserver process to appear ...
	I1030 11:35:50.810458   14108 api_server.go:88] waiting for apiserver healthz status ...
	I1030 11:35:50.810474   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:35:55.812502   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:35:55.812559   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:00.812761   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:00.812791   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:05.813066   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:05.813088   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:10.813509   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:10.813574   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:15.814208   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:15.814270   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:20.815064   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:20.815154   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:25.816711   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:25.816758   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:30.818218   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:30.818236   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:35.820247   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:35.820286   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:40.822547   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:40.822585   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:45.824863   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:45.824888   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:50.827014   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
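
	Each healthz probe above gives the apiserver roughly five seconds before timing out; after repeated failures the harness falls back to collecting component logs (the block that follows). An equivalent manual probe (endpoint and timeout taken from the log; the retry loop itself is illustrative):

	    # Poll the apiserver health endpoint; a healthy apiserver answers the literal string "ok".
	    for _ in $(seq 1 12); do
	      curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -qx ok && { echo healthy; break; }
	      sleep 1
	    done
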
	I1030 11:36:50.827260   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:50.843414   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:36:50.843513   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:50.855807   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:36:50.855891   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:50.866615   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:36:50.866697   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:50.876914   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:36:50.876995   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:50.887399   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:36:50.887471   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:50.898200   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:36:50.898294   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:50.908440   14108 logs.go:282] 0 containers: []
	W1030 11:36:50.908461   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:50.908532   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:50.918869   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:36:50.918886   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:36:50.918891   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:36:50.932030   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:36:50.932040   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:36:50.947700   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:36:50.947710   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:36:50.962061   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:36:50.962074   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:36:50.977482   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:36:50.977494   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:36:50.988978   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:36:50.988989   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:36:51.004122   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:36:51.004134   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:36:51.015985   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:51.015997   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:51.043002   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:51.043011   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:51.047706   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:36:51.047715   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:36:51.061534   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:36:51.061544   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:36:51.088896   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:36:51.088915   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:36:51.103805   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:36:51.103815   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:36:51.116780   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:51.116800   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:51.156051   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:51.156061   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:51.258829   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:36:51.258841   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:36:51.276229   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:36:51.276241   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:36:53.793606   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:36:58.795867   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:36:58.796119   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:36:58.820475   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:36:58.820607   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:36:58.837136   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:36:58.837231   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:36:58.851917   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:36:58.851998   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:36:58.862958   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:36:58.863035   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:36:58.873425   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:36:58.873502   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:36:58.884239   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:36:58.884321   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:36:58.894615   14108 logs.go:282] 0 containers: []
	W1030 11:36:58.894628   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:36:58.894712   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:36:58.905239   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:36:58.905258   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:36:58.905263   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:36:58.920836   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:36:58.920848   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:36:58.932715   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:36:58.932727   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:36:58.956839   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:36:58.956849   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:36:58.961105   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:36:58.961111   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:36:58.975194   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:36:58.975204   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:36:58.986781   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:36:58.986792   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:36:58.998434   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:36:58.998447   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:36:59.009465   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:36:59.009475   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:36:59.048413   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:36:59.048424   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:36:59.084281   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:36:59.084291   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:36:59.109420   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:36:59.109433   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:36:59.123831   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:36:59.123844   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:36:59.135464   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:36:59.135480   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:36:59.150560   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:36:59.150571   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:36:59.168095   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:36:59.168105   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:36:59.182151   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:36:59.182161   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:01.699716   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:06.701978   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:06.702260   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:06.724670   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:06.724806   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:06.740015   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:06.740099   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:06.752230   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:06.752307   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:06.763080   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:06.763167   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:06.773808   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:06.773893   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:06.786369   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:06.786450   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:06.796948   14108 logs.go:282] 0 containers: []
	W1030 11:37:06.796960   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:06.797057   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:06.807590   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:06.807609   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:06.807616   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:06.822113   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:06.822123   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:06.836504   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:06.836515   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:06.851634   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:06.851646   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:06.892725   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:06.892737   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:06.904591   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:06.904605   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:06.919646   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:06.919657   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:06.945335   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:06.945346   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:06.959095   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:06.959105   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:06.970892   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:06.970902   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:06.986298   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:06.986312   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:07.011264   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:07.011272   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:07.015489   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:07.015498   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:07.052041   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:07.052055   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:07.063299   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:07.063311   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:07.074837   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:07.074846   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:07.096317   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:07.096327   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:09.612441   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:14.614228   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:14.614365   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:14.629952   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:14.630038   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:14.640542   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:14.640610   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:14.651217   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:14.651298   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:14.661282   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:14.661367   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:14.671659   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:14.671726   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:14.682157   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:14.682228   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:14.692135   14108 logs.go:282] 0 containers: []
	W1030 11:37:14.692148   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:14.692207   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:14.703007   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:14.703025   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:14.703031   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:14.717078   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:14.717088   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:14.730850   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:14.730862   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:14.742773   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:14.742784   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:14.760583   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:14.760597   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:14.786528   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:14.786540   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:14.823879   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:14.823889   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:14.844560   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:14.844571   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:14.855893   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:14.855904   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:14.867457   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:14.867467   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:14.881901   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:14.881912   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:14.894052   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:14.894062   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:14.898167   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:14.898176   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:14.934237   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:14.934249   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:14.948618   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:14.948629   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:14.974590   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:14.974604   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:14.986551   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:14.986562   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
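
	Each failed health check is followed by the same gathering pass: enumerate containers whose names carry the k8s_ prefix that kubelet/cri-dockerd assigns, then tail the last 400 lines of each. The docker command shapes below are copied from the Run: lines in this log; the Go wrapper around them is a hedged sketch, not minikube's implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "list failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```

	Components with two IDs (kube-apiserver, etcd, kube-scheduler, kube-controller-manager, storage-provisioner) have a crashed earlier instance alongside the current one, which is why both containers are tailed on every pass.
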
	I1030 11:37:17.500619   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:22.502978   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:22.503246   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:22.536575   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:22.536681   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:22.551134   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:22.551212   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:22.563680   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:22.563770   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:22.574626   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:22.574708   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:22.585537   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:22.585634   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:22.597154   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:22.597235   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:22.608215   14108 logs.go:282] 0 containers: []
	W1030 11:37:22.608227   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:22.608297   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:22.618539   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:22.618556   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:22.618562   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:22.631395   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:22.631408   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:22.668539   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:22.668548   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:22.710763   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:22.710774   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:22.735692   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:22.735710   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:22.749920   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:22.749930   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:22.762797   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:22.762809   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:22.780600   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:22.780616   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:22.785042   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:22.785054   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:22.804775   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:22.804786   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:22.819057   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:22.819067   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:22.830042   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:22.830053   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:22.841398   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:22.841413   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:22.860182   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:22.860193   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:22.874326   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:22.874340   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:22.885706   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:22.885718   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:22.906168   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:22.906182   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:25.432711   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:30.435091   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:30.435538   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:30.467569   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:30.467716   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:30.486548   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:30.486663   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:30.501385   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:30.501478   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:30.513330   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:30.513413   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:30.524663   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:30.524737   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:30.539458   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:30.539537   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:30.553881   14108 logs.go:282] 0 containers: []
	W1030 11:37:30.553892   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:30.553959   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:30.564320   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:30.564337   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:30.564342   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:30.607491   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:30.607506   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:30.622217   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:30.622228   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:30.640450   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:30.640461   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:30.652641   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:30.652654   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:30.667524   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:30.667536   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:30.679118   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:30.679130   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:30.683797   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:30.683805   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:30.710043   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:30.710058   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:30.723662   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:30.723672   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:30.735043   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:30.735054   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:30.751646   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:30.751658   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:30.787067   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:30.787077   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:30.799140   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:30.799151   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:30.813862   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:30.813874   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:30.832492   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:30.832505   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:30.845174   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:30.845185   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
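
	The recurring "container status" step uses a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, preferring crictl when it is on PATH and dropping back to the docker CLI otherwise. A sketch of the same preference order in Go (illustrative only):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl if installed, else falls back to docker,
// mirroring the `which crictl || echo crictl` ... `|| sudo docker ps -a`
// chain quoted from the log above.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	// crictl missing or failed: fall back to the docker CLI.
	return exec.Command("docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}
```
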
	I1030 11:37:33.373748   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:38.376449   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:38.376990   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:38.417757   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:38.417923   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:38.439859   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:38.439989   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:38.456445   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:38.456542   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:38.468742   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:38.468827   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:38.479821   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:38.479906   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:38.491270   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:38.491348   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:38.503061   14108 logs.go:282] 0 containers: []
	W1030 11:37:38.503074   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:38.503146   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:38.514093   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:38.514112   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:38.514118   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:38.553359   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:38.553373   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:38.568847   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:38.568857   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:38.581160   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:38.581171   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:38.598089   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:38.598101   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:38.613085   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:38.613098   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:38.637064   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:38.637075   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:38.651602   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:38.651612   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:38.663281   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:38.663292   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:38.681285   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:38.681299   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:38.719488   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:38.719499   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:38.733574   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:38.733586   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:38.748143   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:38.748153   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:38.759317   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:38.759330   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:38.771208   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:38.771219   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:38.776092   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:38.776099   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:38.801670   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:38.801682   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:41.327522   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:46.330213   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:46.330486   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:46.353715   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:46.353829   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:46.369124   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:46.369218   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:46.381897   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:46.381990   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:46.392944   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:46.393021   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:46.403520   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:46.403598   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:46.425522   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:46.425603   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:46.436472   14108 logs.go:282] 0 containers: []
	W1030 11:37:46.436484   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:46.436549   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:46.447300   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:46.447317   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:46.447323   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:46.481538   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:46.481552   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:46.495645   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:46.495658   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:46.508085   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:46.508099   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:46.520850   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:46.520865   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:46.538087   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:46.538097   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:46.552545   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:46.552557   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:46.564498   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:46.564509   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:46.579262   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:46.579273   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:46.603158   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:46.603165   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:46.640297   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:46.640306   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:46.654738   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:46.654749   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:46.679744   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:46.679756   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:46.694840   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:46.694852   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:46.706152   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:46.706164   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:46.710332   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:46.710341   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:46.722419   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:46.722429   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:49.238512   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:37:54.240827   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:37:54.240954   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:37:54.254569   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:37:54.254661   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:37:54.266479   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:37:54.266564   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:37:54.276865   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:37:54.276944   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:37:54.287938   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:37:54.288019   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:37:54.299212   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:37:54.299289   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:37:54.309992   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:37:54.310065   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:37:54.326312   14108 logs.go:282] 0 containers: []
	W1030 11:37:54.326325   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:37:54.326401   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:37:54.341284   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:37:54.341302   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:37:54.341308   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:37:54.345522   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:37:54.345531   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:37:54.359611   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:37:54.359621   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:37:54.371545   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:37:54.371557   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:37:54.396086   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:37:54.396095   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:37:54.409900   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:37:54.409910   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:37:54.424705   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:37:54.424714   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:37:54.435873   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:37:54.435882   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:37:54.450815   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:37:54.450826   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:37:54.464309   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:37:54.464321   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:37:54.481377   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:37:54.481387   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:37:54.495164   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:37:54.495175   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:37:54.533407   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:37:54.533418   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:37:54.567957   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:37:54.567968   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:37:54.593625   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:37:54.593642   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:37:54.605308   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:37:54.605322   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:37:54.618154   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:37:54.618168   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:37:57.132426   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:02.135081   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:02.135210   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:02.154060   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:02.154160   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:02.165036   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:02.165119   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:02.175404   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:02.175493   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:02.186807   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:02.186887   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:02.197569   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:02.197643   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:02.208146   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:02.208209   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:02.219076   14108 logs.go:282] 0 containers: []
	W1030 11:38:02.219088   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:02.219151   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:02.229437   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:02.229455   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:02.229461   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:02.249719   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:02.249732   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:02.286102   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:02.286110   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:02.297310   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:02.297322   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:02.308913   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:02.308923   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:02.326157   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:02.326170   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:02.337998   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:02.338011   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:02.372424   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:02.372435   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:02.386789   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:02.386804   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:02.398503   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:02.398517   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:02.410391   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:02.410400   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:02.414488   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:02.414495   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:02.429109   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:02.429119   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:02.454428   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:02.454439   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:02.468162   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:02.468172   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:02.482640   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:02.482650   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:02.494615   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:02.494625   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:05.019103   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:10.021505   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:10.021600   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:10.033913   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:10.033996   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:10.044671   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:10.044751   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:10.055684   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:10.055763   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:10.066256   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:10.066334   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:10.076705   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:10.076780   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:10.087044   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:10.087125   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:10.097260   14108 logs.go:282] 0 containers: []
	W1030 11:38:10.097273   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:10.097338   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:10.108884   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:10.108902   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:10.108908   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:10.113273   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:10.113282   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:10.148132   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:10.148147   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:10.162776   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:10.162788   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:10.202009   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:10.202020   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:10.230914   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:10.230925   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:10.249692   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:10.249703   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:10.261535   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:10.261547   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:10.273238   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:10.273251   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:10.298313   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:10.298321   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:10.313844   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:10.313854   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:10.327804   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:10.327815   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:10.339703   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:10.339714   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:10.357113   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:10.357123   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:10.371153   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:10.371165   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:10.383775   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:10.383785   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:10.395534   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:10.395545   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:12.908110   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:17.910501   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:17.910753   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:17.938216   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:17.938335   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:17.955821   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:17.955913   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:17.968130   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:17.968217   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:17.979353   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:17.979430   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:17.989797   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:17.989871   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:18.000242   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:18.000321   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:18.018071   14108 logs.go:282] 0 containers: []
	W1030 11:38:18.018082   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:18.018148   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:18.032821   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:18.032839   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:18.032845   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:18.072099   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:18.072109   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:18.076525   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:18.076533   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:18.112520   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:18.112532   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:18.137537   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:18.137548   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:18.149397   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:18.149409   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:18.163562   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:18.163574   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:18.188478   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:18.188489   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:18.202787   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:18.202799   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:18.214278   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:18.214292   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:18.228515   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:18.228529   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:18.239490   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:18.239500   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:18.253336   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:18.253348   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:18.265174   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:18.265186   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:18.276904   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:18.276915   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:18.299119   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:18.299133   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:18.313381   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:18.313392   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:20.827313   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:25.829604   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:25.830202   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:25.876915   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:25.877065   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:25.898508   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:25.898606   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:25.912772   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:25.912865   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:25.924100   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:25.924181   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:25.934543   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:25.934624   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:25.945380   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:25.945450   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:25.955684   14108 logs.go:282] 0 containers: []
	W1030 11:38:25.955698   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:25.955765   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:25.965936   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:25.965956   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:25.965962   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:26.003716   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:26.003725   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:26.020748   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:26.020760   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:26.038465   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:26.038478   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:26.049911   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:26.049924   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:26.065742   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:26.065752   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:26.080556   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:26.080567   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:26.093525   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:26.093536   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:26.121341   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:26.121354   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:26.135976   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:26.135987   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:26.160215   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:26.160224   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:26.171964   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:26.171975   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:26.184169   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:26.184179   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:26.223394   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:26.223404   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:26.228297   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:26.228305   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:26.253429   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:26.253438   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:26.269209   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:26.269221   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:28.790353   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:33.791469   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:33.791801   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:33.819613   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:33.819765   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:33.837098   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:33.837195   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:33.851073   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:33.851159   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:33.862782   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:33.862868   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:33.876756   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:33.876836   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:33.887730   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:33.887812   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:33.897726   14108 logs.go:282] 0 containers: []
	W1030 11:38:33.897742   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:33.897809   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:33.908865   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:33.908883   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:33.908888   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:33.913221   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:33.913227   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:33.925204   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:33.925216   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:33.948614   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:33.948623   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:33.961118   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:33.961133   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:33.997862   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:33.997873   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:34.012612   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:34.012626   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:34.027432   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:34.027446   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:34.039559   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:34.039573   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:34.056439   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:34.056453   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:34.074260   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:34.074271   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:34.113524   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:34.113536   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:34.127793   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:34.127804   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:34.141909   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:34.141919   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:34.154585   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:34.154599   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:34.169689   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:34.169699   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:34.217510   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:34.217520   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:36.731846   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:41.733989   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:41.734259   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:41.760028   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:41.760140   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:41.774468   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:41.774557   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:41.791110   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:41.791188   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:41.802227   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:41.802313   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:41.813141   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:41.813217   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:41.823462   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:41.823539   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:41.834504   14108 logs.go:282] 0 containers: []
	W1030 11:38:41.834517   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:41.834582   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:41.847012   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:41.847029   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:41.847035   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:41.851840   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:41.851846   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:41.866148   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:41.866163   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:41.883499   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:41.883509   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:41.897808   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:41.897818   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:41.935651   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:41.935658   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:41.971632   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:41.971641   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:41.985733   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:41.985744   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:41.999840   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:41.999849   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:42.012302   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:42.012314   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:42.025871   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:42.025885   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:42.050726   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:42.050747   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:42.062951   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:42.062961   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:42.074563   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:42.074575   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:42.086621   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:42.086630   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:42.101164   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:42.101175   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:42.132480   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:42.132490   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:44.653711   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:49.655952   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:49.656161   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:49.672209   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:49.672301   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:49.684512   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:49.684599   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:49.695688   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:49.695760   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:49.718012   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:49.718092   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:49.728258   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:49.728339   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:49.738233   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:49.738321   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:49.748576   14108 logs.go:282] 0 containers: []
	W1030 11:38:49.748587   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:49.748656   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:49.759353   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:49.759371   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:49.759378   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:49.777138   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:49.777152   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:49.792871   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:49.792882   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:38:49.836475   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:49.836490   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:49.865723   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:49.865737   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:49.879396   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:49.879410   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:49.893837   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:49.893850   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:49.905623   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:49.905636   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:49.917958   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:49.917969   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:49.955400   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:49.955408   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:49.967162   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:49.967177   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:49.981380   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:49.981391   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:49.985614   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:49.985623   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:49.999953   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:49.999966   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:50.011532   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:50.011544   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:50.026960   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:50.026974   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:50.052272   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:50.052284   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:52.566293   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:38:57.568469   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:38:57.568735   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:38:57.587598   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:38:57.587708   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:38:57.600938   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:38:57.601026   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:38:57.615344   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:38:57.615423   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:38:57.626191   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:38:57.626271   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:38:57.637288   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:38:57.637363   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:38:57.651706   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:38:57.651784   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:38:57.661926   14108 logs.go:282] 0 containers: []
	W1030 11:38:57.661938   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:38:57.662003   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:38:57.672069   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:38:57.672087   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:38:57.672093   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:38:57.685970   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:38:57.685980   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:38:57.697984   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:38:57.697998   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:38:57.712379   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:38:57.712392   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:38:57.727148   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:38:57.727157   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:38:57.738615   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:38:57.738629   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:38:57.743226   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:38:57.743234   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:38:57.757610   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:38:57.757622   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:38:57.769635   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:38:57.769645   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:38:57.792318   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:38:57.792326   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:38:57.804266   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:38:57.804276   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:38:57.841312   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:38:57.841321   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:38:57.866434   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:38:57.866445   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:38:57.883633   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:38:57.883644   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:38:57.897653   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:38:57.897665   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:38:57.909740   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:38:57.909754   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:38:57.922317   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:38:57.922328   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:39:00.459576   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:39:05.461812   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:39:05.461992   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:39:05.473694   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:39:05.473780   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:39:05.484536   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:39:05.484625   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:39:05.495440   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:39:05.495519   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:39:05.506129   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:39:05.506207   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:39:05.516691   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:39:05.516769   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:39:05.527142   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:39:05.527219   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:39:05.538082   14108 logs.go:282] 0 containers: []
	W1030 11:39:05.538093   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:39:05.538174   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:39:05.548909   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:39:05.548929   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:39:05.548935   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:39:05.564150   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:39:05.564161   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:39:05.577038   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:39:05.577049   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:39:05.617085   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:39:05.617096   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:39:05.631867   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:39:05.631877   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:39:05.643709   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:39:05.643721   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:39:05.681431   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:39:05.681441   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:39:05.706387   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:39:05.706401   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:39:05.720538   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:39:05.720553   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:39:05.732138   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:39:05.732151   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:39:05.751160   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:39:05.751176   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:39:05.765905   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:39:05.765918   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:39:05.777291   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:39:05.777306   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:39:05.781651   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:39:05.781658   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:39:05.796721   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:39:05.796737   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:39:05.808678   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:39:05.808692   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:39:05.820193   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:39:05.820205   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:39:08.347434   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:39:13.350114   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:39:13.350696   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:39:13.419752   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:39:13.419847   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:39:13.463764   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:39:13.463851   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:39:13.475332   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:39:13.475415   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:39:13.485740   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:39:13.485834   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:39:13.496432   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:39:13.496503   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:39:13.509029   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:39:13.509108   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:39:13.519497   14108 logs.go:282] 0 containers: []
	W1030 11:39:13.519508   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:39:13.519570   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:39:13.530414   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:39:13.530430   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:39:13.530436   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:39:13.541751   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:39:13.541760   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:39:13.553736   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:39:13.553745   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:39:13.567766   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:39:13.567780   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:39:13.579432   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:39:13.579446   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:39:13.591266   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:39:13.591280   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:39:13.614003   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:39:13.614014   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:39:13.652160   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:39:13.652166   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:39:13.656800   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:39:13.656806   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:39:13.694438   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:39:13.694450   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:39:13.708447   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:39:13.708460   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:39:13.720049   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:39:13.720058   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:39:13.732679   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:39:13.732688   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:39:13.757457   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:39:13.757467   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:39:13.772613   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:39:13.772626   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:39:13.787143   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:39:13.787155   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:39:13.801904   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:39:13.801915   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:39:16.321035   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:39:21.323745   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:39:21.324277   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:39:21.368248   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:39:21.368405   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:39:21.388263   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:39:21.388374   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:39:21.403232   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:39:21.403318   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:39:21.419183   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:39:21.419264   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:39:21.429609   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:39:21.429689   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:39:21.442974   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:39:21.443061   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:39:21.455657   14108 logs.go:282] 0 containers: []
	W1030 11:39:21.455669   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:39:21.455729   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:39:21.466174   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:39:21.466191   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:39:21.466197   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:39:21.502975   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:39:21.502984   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:39:21.516926   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:39:21.516938   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:39:21.532133   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:39:21.532146   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:39:21.544048   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:39:21.544060   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:39:21.555811   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:39:21.555823   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:39:21.559922   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:39:21.559932   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:39:21.594647   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:39:21.594660   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:39:21.609384   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:39:21.609398   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:39:21.635253   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:39:21.635267   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:39:21.648230   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:39:21.648244   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:39:21.660687   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:39:21.660698   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:39:21.676013   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:39:21.676026   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:39:21.687795   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:39:21.687805   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:39:21.699889   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:39:21.699902   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:39:21.717439   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:39:21.717451   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:39:21.731456   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:39:21.731466   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:39:24.257403   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:39:29.260185   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:39:29.260492   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:39:29.286692   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:39:29.286785   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:39:29.304307   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:39:29.304385   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:39:29.318000   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:39:29.318082   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:39:29.330130   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:39:29.330218   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:39:29.342349   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:39:29.342417   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:39:29.353656   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:39:29.353719   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:39:29.364564   14108 logs.go:282] 0 containers: []
	W1030 11:39:29.364576   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:39:29.364635   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:39:29.375799   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:39:29.375817   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:39:29.375822   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:39:29.390572   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:39:29.390581   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:39:29.402299   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:39:29.402308   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:39:29.413501   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:39:29.413511   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:39:29.436617   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:39:29.436627   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:39:29.474463   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:39:29.474474   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:39:29.490078   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:39:29.490091   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:39:29.501700   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:39:29.501709   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:39:29.513107   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:39:29.513119   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:39:29.533843   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:39:29.533855   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:39:29.548007   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:39:29.548017   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:39:29.565974   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:39:29.565987   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:39:29.599231   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:39:29.599242   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:39:29.620455   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:39:29.620464   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:39:29.631969   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:39:29.631979   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:39:29.636459   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:39:29.636468   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:39:29.647963   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:39:29.647976   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:39:32.184573   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:39:37.187278   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:39:37.187908   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:39:37.231260   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:39:37.231404   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:39:37.251550   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:39:37.251667   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:39:37.268211   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:39:37.268288   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:39:37.280894   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:39:37.280971   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:39:37.291654   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:39:37.291726   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:39:37.302588   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:39:37.302665   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:39:37.312367   14108 logs.go:282] 0 containers: []
	W1030 11:39:37.312385   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:39:37.312453   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:39:37.323392   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:39:37.323410   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:39:37.323416   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:39:37.328139   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:39:37.328147   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:39:37.353295   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:39:37.353307   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:39:37.368134   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:39:37.368148   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:39:37.379760   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:39:37.379770   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:39:37.401778   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:39:37.401784   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:39:37.415752   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:39:37.415762   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:39:37.430063   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:39:37.430073   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:39:37.447615   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:39:37.447625   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:39:37.484727   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:39:37.484734   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:39:37.498127   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:39:37.498140   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:39:37.509901   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:39:37.509912   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:39:37.530845   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:39:37.530858   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:39:37.547756   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:39:37.547769   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:39:37.590972   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:39:37.590986   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:39:37.604122   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:39:37.604135   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:39:37.618965   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:39:37.618979   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:39:40.133568   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:39:45.135746   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:39:45.136201   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:39:45.168212   14108 logs.go:282] 2 containers: [4508068f4ca1 9e4f9a6580ee]
	I1030 11:39:45.168361   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:39:45.193876   14108 logs.go:282] 2 containers: [e395a4682cc5 d6a9e90789a1]
	I1030 11:39:45.193973   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:39:45.206653   14108 logs.go:282] 1 containers: [8b8572d0090f]
	I1030 11:39:45.206732   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:39:45.219249   14108 logs.go:282] 2 containers: [7488ffe8a526 7b1ffc1f1881]
	I1030 11:39:45.219333   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:39:45.229715   14108 logs.go:282] 1 containers: [68856c6a0b81]
	I1030 11:39:45.229791   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:39:45.241562   14108 logs.go:282] 2 containers: [3dd43669ef05 74c76d98b1d5]
	I1030 11:39:45.241641   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:39:45.251890   14108 logs.go:282] 0 containers: []
	W1030 11:39:45.251906   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:39:45.251964   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:39:45.262298   14108 logs.go:282] 2 containers: [130b7234929f 82d3dbe6a441]
	I1030 11:39:45.262316   14108 logs.go:123] Gathering logs for kube-controller-manager [3dd43669ef05] ...
	I1030 11:39:45.262322   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dd43669ef05"
	I1030 11:39:45.279987   14108 logs.go:123] Gathering logs for kube-proxy [68856c6a0b81] ...
	I1030 11:39:45.279998   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68856c6a0b81"
	I1030 11:39:45.292906   14108 logs.go:123] Gathering logs for kube-apiserver [4508068f4ca1] ...
	I1030 11:39:45.292916   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4508068f4ca1"
	I1030 11:39:45.307242   14108 logs.go:123] Gathering logs for kube-scheduler [7b1ffc1f1881] ...
	I1030 11:39:45.307255   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1ffc1f1881"
	I1030 11:39:45.321943   14108 logs.go:123] Gathering logs for kube-controller-manager [74c76d98b1d5] ...
	I1030 11:39:45.321955   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c76d98b1d5"
	I1030 11:39:45.337645   14108 logs.go:123] Gathering logs for storage-provisioner [130b7234929f] ...
	I1030 11:39:45.337657   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130b7234929f"
	I1030 11:39:45.349108   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:39:45.349117   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:39:45.371795   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:39:45.371813   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:39:45.406703   14108 logs.go:123] Gathering logs for etcd [e395a4682cc5] ...
	I1030 11:39:45.406717   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e395a4682cc5"
	I1030 11:39:45.425174   14108 logs.go:123] Gathering logs for kube-scheduler [7488ffe8a526] ...
	I1030 11:39:45.425185   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7488ffe8a526"
	I1030 11:39:45.436766   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:39:45.436776   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:39:45.448967   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:39:45.448980   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:39:45.488067   14108 logs.go:123] Gathering logs for kube-apiserver [9e4f9a6580ee] ...
	I1030 11:39:45.488084   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e4f9a6580ee"
	I1030 11:39:45.515091   14108 logs.go:123] Gathering logs for etcd [d6a9e90789a1] ...
	I1030 11:39:45.515103   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6a9e90789a1"
	I1030 11:39:45.529309   14108 logs.go:123] Gathering logs for coredns [8b8572d0090f] ...
	I1030 11:39:45.529321   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b8572d0090f"
	I1030 11:39:45.540179   14108 logs.go:123] Gathering logs for storage-provisioner [82d3dbe6a441] ...
	I1030 11:39:45.540189   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82d3dbe6a441"
	I1030 11:39:45.551652   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:39:45.551662   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:39:48.058413   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:39:53.061209   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:39:53.061421   14108 kubeadm.go:597] duration metric: took 4m4.054518875s to restartPrimaryControlPlane
	W1030 11:39:53.061604   14108 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1030 11:39:53.061666   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1030 11:39:54.150033   14108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.088362875s)
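
After 4m4s of failed healthz polls, restartPrimaryControlPlane gives up and minikube falls back to a full reset: kubeadm reset --force wipes /etc/kubernetes, which is why every subsequent ls and grep on the four kubeconfig files below fails with "No such file or directory". A condensed Go sketch of the fallback step just taken, assuming the binary path and CRI socket shown in the log (error handling trimmed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Wipe the half-restarted control plane, exactly as in the Run line above.
    	cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" ` +
    		`kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`
    	err := exec.Command("/bin/bash", "-c", cmd).Run()
    	fmt.Printf("Completed: %s (%s, err=%v)\n", cmd, time.Since(start), err)
    	// minikube then copies kubeadm.yaml.new into place and re-runs
    	// "kubeadm init" from it, as the following log lines show.
    }
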
	I1030 11:39:54.150105   14108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 11:39:54.155112   14108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 11:39:54.158031   14108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 11:39:54.160566   14108 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 11:39:54.160572   14108 kubeadm.go:157] found existing configuration files:
	
	I1030 11:39:54.160601   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/admin.conf
	I1030 11:39:54.163171   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1030 11:39:54.163202   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1030 11:39:54.166102   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/kubelet.conf
	I1030 11:39:54.168681   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1030 11:39:54.168714   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1030 11:39:54.171519   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/controller-manager.conf
	I1030 11:39:54.174414   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1030 11:39:54.174446   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1030 11:39:54.177067   14108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/scheduler.conf
	I1030 11:39:54.179622   14108 kubeadm.go:163] "https://control-plane.minikube.internal:57416" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57416 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1030 11:39:54.179649   14108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
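
The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; since kubeadm reset already deleted all four files, every grep exits with status 2 and every rm -f is a no-op. A sketch of that loop, assuming the endpoint shown in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:57416"
    	for _, conf := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + conf
    		// grep exits non-zero if the endpoint is absent or the file is missing...
    		if err := exec.Command("grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			// ...in which case the stale (or already-deleted) file is removed.
    			_ = os.Remove(path) // rm -f semantics: ignore "does not exist"
    		}
    	}
    }
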
	I1030 11:39:54.182609   14108 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 11:39:54.201462   14108 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1030 11:39:54.201489   14108 kubeadm.go:310] [preflight] Running pre-flight checks
	I1030 11:39:54.251999   14108 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 11:39:54.252059   14108 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 11:39:54.252116   14108 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1030 11:39:54.299925   14108 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 11:39:54.307035   14108 out.go:235]   - Generating certificates and keys ...
	I1030 11:39:54.307071   14108 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1030 11:39:54.307109   14108 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1030 11:39:54.307151   14108 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 11:39:54.307184   14108 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1030 11:39:54.307229   14108 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1030 11:39:54.307275   14108 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1030 11:39:54.307311   14108 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1030 11:39:54.307350   14108 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1030 11:39:54.307398   14108 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 11:39:54.307437   14108 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 11:39:54.307459   14108 kubeadm.go:310] [certs] Using the existing "sa" key
	I1030 11:39:54.307493   14108 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 11:39:54.420229   14108 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 11:39:54.590501   14108 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 11:39:54.806069   14108 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 11:39:54.878698   14108 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 11:39:54.910140   14108 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 11:39:54.910542   14108 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 11:39:54.910580   14108 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1030 11:39:54.997020   14108 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 11:39:55.005066   14108 out.go:235]   - Booting up control plane ...
	I1030 11:39:55.005122   14108 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 11:39:55.005167   14108 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 11:39:55.005201   14108 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 11:39:55.005251   14108 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 11:39:55.005341   14108 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 11:39:59.500632   14108 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501451 seconds
	I1030 11:39:59.500700   14108 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 11:39:59.504376   14108 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 11:40:00.013316   14108 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 11:40:00.013425   14108 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-877000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 11:40:00.517468   14108 kubeadm.go:310] [bootstrap-token] Using token: ur8hob.p0wcdfscjzgvm8wl
	I1030 11:40:00.521664   14108 out.go:235]   - Configuring RBAC rules ...
	I1030 11:40:00.521734   14108 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 11:40:00.521797   14108 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 11:40:00.528628   14108 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 11:40:00.529758   14108 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1030 11:40:00.530727   14108 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 11:40:00.531730   14108 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 11:40:00.535364   14108 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 11:40:00.703209   14108 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1030 11:40:00.921876   14108 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1030 11:40:00.922329   14108 kubeadm.go:310] 
	I1030 11:40:00.922359   14108 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1030 11:40:00.922363   14108 kubeadm.go:310] 
	I1030 11:40:00.922398   14108 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1030 11:40:00.922401   14108 kubeadm.go:310] 
	I1030 11:40:00.922419   14108 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1030 11:40:00.922474   14108 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 11:40:00.922500   14108 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 11:40:00.922505   14108 kubeadm.go:310] 
	I1030 11:40:00.922537   14108 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1030 11:40:00.922543   14108 kubeadm.go:310] 
	I1030 11:40:00.922573   14108 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 11:40:00.922576   14108 kubeadm.go:310] 
	I1030 11:40:00.922604   14108 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1030 11:40:00.922648   14108 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 11:40:00.922693   14108 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 11:40:00.922696   14108 kubeadm.go:310] 
	I1030 11:40:00.922742   14108 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 11:40:00.922784   14108 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1030 11:40:00.922788   14108 kubeadm.go:310] 
	I1030 11:40:00.922833   14108 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ur8hob.p0wcdfscjzgvm8wl \
	I1030 11:40:00.922889   14108 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7be18db78d143f7f1b3db8c007a27a4a1aa468667e082743ca73b9d1ecdf0184 \
	I1030 11:40:00.922912   14108 kubeadm.go:310] 	--control-plane 
	I1030 11:40:00.922915   14108 kubeadm.go:310] 
	I1030 11:40:00.922962   14108 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1030 11:40:00.922966   14108 kubeadm.go:310] 
	I1030 11:40:00.923010   14108 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ur8hob.p0wcdfscjzgvm8wl \
	I1030 11:40:00.923066   14108 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7be18db78d143f7f1b3db8c007a27a4a1aa468667e082743ca73b9d1ecdf0184 
	I1030 11:40:00.923189   14108 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 11:40:00.923261   14108 cni.go:84] Creating CNI manager for ""
	I1030 11:40:00.923269   14108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:40:00.930777   14108 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 11:40:00.933832   14108 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 11:40:00.936658   14108 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
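
The 496-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist above is not printed in the log. A plausible minimal bridge conflist of the kind the bridge CNI plugin accepts is sketched below; every field value here is an assumption, not the file minikube actually wrote.

    # Hypothetical reconstruction of the bridge CNI config staged above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
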
	I1030 11:40:00.941375   14108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 11:40:00.941427   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 11:40:00.941441   14108 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-877000 minikube.k8s.io/updated_at=2024_10_30T11_40_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605ec3dacea4f25ee181ccc490c641c149fd11d0 minikube.k8s.io/name=stopped-upgrade-877000 minikube.k8s.io/primary=true
	I1030 11:40:00.985837   14108 ops.go:34] apiserver oom_adj: -16
	I1030 11:40:00.985834   14108 kubeadm.go:1113] duration metric: took 44.451042ms to wait for elevateKubeSystemPrivileges
	I1030 11:40:00.991338   14108 kubeadm.go:394] duration metric: took 4m12.001835042s to StartCluster
	I1030 11:40:00.991354   14108 settings.go:142] acquiring lock: {Name:mk1cee1df7de5eaabbeab12792d956523e6c9184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:40:00.991449   14108 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:40:00.991745   14108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/kubeconfig: {Name:mkea525c0c25887bd8d562c8182eb3da015af133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:40:00.991936   14108 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:00.991960   14108 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1030 11:40:00.992032   14108 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-877000"
	I1030 11:40:00.992041   14108 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-877000"
	W1030 11:40:00.992044   14108 addons.go:243] addon storage-provisioner should already be in state true
	I1030 11:40:00.992057   14108 host.go:66] Checking if "stopped-upgrade-877000" exists ...
	I1030 11:40:00.992055   14108 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-877000"
	I1030 11:40:00.992073   14108 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-877000"
	I1030 11:40:00.992073   14108 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:40:00.992306   14108 retry.go:31] will retry after 1.462904595s: connect: dial unix /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/monitor: connect: connection refused
	I1030 11:40:00.992980   14108 kapi.go:59] client config for stopped-upgrade-877000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/stopped-upgrade-877000/client.key", CAFile:"/Users/jenkins/minikube-integration/19883-11536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10245e7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 11:40:00.993105   14108 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-877000"
	W1030 11:40:00.993112   14108 addons.go:243] addon default-storageclass should already be in state true
	I1030 11:40:00.993119   14108 host.go:66] Checking if "stopped-upgrade-877000" exists ...
	I1030 11:40:00.993639   14108 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 11:40:00.993644   14108 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 11:40:00.993650   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
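
The 271-byte storageclass.yaml staged above is minikube's bundled default-StorageClass manifest. A plausible reconstruction follows; the exact contents are not in the log, so treat this as an assumption rather than the file as shipped.

    # Hypothetical reconstruction of the staged default StorageClass addon.
    sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
    provisioner: k8s.io/minikube-hostpath
    EOF
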
	I1030 11:40:00.995802   14108 out.go:177] * Verifying Kubernetes components...
	I1030 11:40:01.002791   14108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 11:40:01.093226   14108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1030 11:40:01.097967   14108 api_server.go:52] waiting for apiserver process to appear ...
	I1030 11:40:01.098020   14108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 11:40:01.101768   14108 api_server.go:72] duration metric: took 109.823834ms to wait for apiserver process to appear ...
	I1030 11:40:01.101776   14108 api_server.go:88] waiting for apiserver healthz status ...
	I1030 11:40:01.101783   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:01.120571   14108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 11:40:01.487529   14108 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1030 11:40:01.487540   14108 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1030 11:40:02.464203   14108 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 11:40:02.468365   14108 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 11:40:02.468401   14108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 11:40:02.468440   14108 sshutil.go:53] new ssh client: &{IP:localhost Port:57382 SSHKeyPath:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/stopped-upgrade-877000/id_rsa Username:docker}
	I1030 11:40:02.531849   14108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 11:40:06.103953   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:06.104064   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:11.105363   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:11.105392   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:16.106234   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:16.106251   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:21.107185   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:21.107274   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:26.108868   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:26.108942   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:31.110766   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:31.110818   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1030 11:40:31.489707   14108 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1030 11:40:31.497077   14108 out.go:177] * Enabled addons: storage-provisioner
	I1030 11:40:31.510048   14108 addons.go:510] duration metric: took 30.518427083s for enable addons: enabled=[storage-provisioner]
	I1030 11:40:36.113188   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:36.113288   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:41.115956   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:41.115994   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:46.118230   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:46.118369   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:51.121037   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:51.121085   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:40:56.123573   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:40:56.123674   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:01.126213   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
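
Each "Checking apiserver healthz" / "stopped:" pair above is a single probe of the /healthz endpoint that times out after roughly five seconds. A rough shell equivalent of that polling loop is sketched below (minikube does this in Go via its API client, not a shell; the URL is taken from the log, and -k stands in for the cluster CA that the real client verifies against):

    # Poll the apiserver health endpoint until it answers "ok".
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok$'; do
      echo "apiserver not healthy yet; retrying..."
      sleep 5
    done
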
	I1030 11:41:01.126442   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:01.144148   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:01.144241   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:01.157509   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:01.157586   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:01.168818   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:01.168896   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:01.183315   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:01.183395   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:01.206850   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:01.206919   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:01.217042   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:01.217125   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:01.228902   14108 logs.go:282] 0 containers: []
	W1030 11:41:01.228913   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:01.228978   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:01.239613   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:01.239628   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:01.239634   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:01.253420   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:01.253431   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:01.268190   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:01.268203   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:41:01.280056   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:01.280069   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:01.305177   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:01.305184   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:01.343095   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:01.343103   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:01.347099   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:01.347107   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:01.360744   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:01.360755   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:01.379473   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:01.379487   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:01.390777   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:01.390791   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:01.402409   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:01.402422   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:01.437661   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:01.437675   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:01.449556   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:01.449567   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
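
The diagnostics pass above (repeated on every failed health check that follows) locates each control-plane container by cri-dockerd's k8s_<component> name prefix, then tails its last 400 lines. A condensed shell version of the same gather loop, as a sketch rather than minikube's implementation:

    # For each component, find its container ID(s) and dump the recent logs.
    for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${comp}" --format '{{.ID}}'); do
        echo "==> ${comp} [${id}]"
        docker logs --tail 400 "$id"
      done
    done
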
	I1030 11:41:03.961222   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:08.963387   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:41:08.963672   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:08.990038   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:08.990168   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:09.007209   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:09.007292   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:09.020668   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:09.020751   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:09.031955   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:09.032022   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:09.042460   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:09.042543   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:09.052599   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:09.052665   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:09.062793   14108 logs.go:282] 0 containers: []
	W1030 11:41:09.062807   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:09.062876   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:09.072823   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:09.072839   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:09.072845   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:09.091060   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:09.091070   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:09.102691   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:09.102701   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:09.127198   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:09.127206   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:09.131379   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:09.131390   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:09.145796   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:09.145810   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:09.163558   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:09.163567   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:41:09.175243   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:09.175254   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:41:09.186564   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:09.186575   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:09.225282   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:09.225291   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:09.261637   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:09.261649   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:09.273484   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:09.273496   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:09.288303   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:09.288316   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:11.802999   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:16.805839   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:41:16.805919   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:16.817395   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:16.817464   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:16.828307   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:16.828371   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:16.840111   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:16.840188   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:16.852598   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:16.852678   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:16.863738   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:16.863793   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:16.874173   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:16.874243   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:16.885247   14108 logs.go:282] 0 containers: []
	W1030 11:41:16.885261   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:16.885318   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:16.897004   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:16.897019   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:16.897025   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:16.903223   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:16.903234   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:16.923073   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:16.923082   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:16.936126   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:16.936137   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:16.962315   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:16.962327   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:16.974267   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:16.974279   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:41:16.987519   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:16.987531   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:17.006165   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:17.006179   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:17.047385   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:17.047401   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:17.090075   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:17.090090   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:17.105611   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:17.105624   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:17.119609   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:17.119623   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:17.132490   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:17.132502   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:41:19.647613   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:24.650459   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:41:24.651064   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:24.690106   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:24.690258   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:24.724642   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:24.724733   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:24.737715   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:24.737800   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:24.748802   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:24.748888   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:24.759356   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:24.759442   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:24.769532   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:24.769610   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:24.779472   14108 logs.go:282] 0 containers: []
	W1030 11:41:24.779482   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:24.779541   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:24.792918   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:24.792934   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:24.792939   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:24.811876   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:24.811887   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:24.823243   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:24.823258   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:24.857475   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:24.857488   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:41:24.874284   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:24.874297   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:24.893698   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:24.893712   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:24.918784   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:24.918796   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:24.930714   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:24.930728   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:24.946186   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:24.946199   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:41:24.959836   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:24.959849   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:24.972108   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:24.972123   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:25.008917   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:25.008925   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:25.012900   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:25.012909   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:27.539080   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:32.541976   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:41:32.542472   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:32.576845   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:32.576987   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:32.602619   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:32.602720   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:32.615980   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:32.616062   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:32.627993   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:32.628068   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:32.638516   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:32.638607   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:32.648958   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:32.649043   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:32.659471   14108 logs.go:282] 0 containers: []
	W1030 11:41:32.659484   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:32.659551   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:32.674354   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:32.674368   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:32.674373   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:32.686754   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:32.686766   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:41:32.699245   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:32.699258   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:32.714705   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:32.714718   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:41:32.726507   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:32.726519   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:32.744838   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:32.744850   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:32.756420   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:32.756434   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:32.779652   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:32.779659   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:32.817896   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:32.817909   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:32.823087   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:32.823096   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:32.838014   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:32.838025   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:32.851824   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:32.851835   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:32.862996   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:32.863005   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:35.402052   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:40.405008   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:41:40.405569   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:40.443909   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:40.444067   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:40.466039   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:40.466155   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:40.481498   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:40.481582   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:40.502369   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:40.502462   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:40.513307   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:40.513385   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:40.524852   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:40.524942   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:40.536656   14108 logs.go:282] 0 containers: []
	W1030 11:41:40.536667   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:40.536753   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:40.548493   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:40.548509   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:40.548516   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:41:40.561061   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:40.561072   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:40.580134   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:40.580152   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:41:40.593045   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:40.593059   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:40.617979   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:40.617997   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:40.657010   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:40.657029   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:40.676349   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:40.676364   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:40.693714   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:40.693728   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:40.706529   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:40.706543   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:40.719950   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:40.719963   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:40.724239   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:40.724247   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:40.760640   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:40.760654   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:40.779031   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:40.779047   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:43.294779   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:48.297249   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:41:48.297793   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:48.335420   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:48.335565   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:48.356002   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:48.356110   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:48.374783   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:48.374862   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:48.387057   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:48.387134   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:48.397934   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:48.398012   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:48.412576   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:48.412649   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:48.423287   14108 logs.go:282] 0 containers: []
	W1030 11:41:48.423299   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:48.423368   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:48.434348   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:48.434366   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:48.434371   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:48.446173   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:48.446183   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:48.450799   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:48.450807   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:48.486289   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:48.486300   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:48.505867   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:48.505880   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:48.525069   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:48.525079   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:41:48.536641   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:48.536654   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:48.563908   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:48.563919   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:48.588258   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:48.588270   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:48.599766   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:48.599777   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:48.638149   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:48.638160   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:48.652731   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:48.652740   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:41:48.667324   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:48.667337   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:51.193486   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:41:56.196135   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:41:56.196683   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:41:56.231196   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:41:56.231344   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:41:56.253598   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:41:56.253707   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:41:56.268055   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:41:56.268135   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:41:56.279993   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:41:56.280069   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:41:56.291422   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:41:56.291501   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:41:56.302454   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:41:56.302534   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:41:56.312773   14108 logs.go:282] 0 containers: []
	W1030 11:41:56.312787   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:41:56.312850   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:41:56.323352   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:41:56.323366   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:41:56.323372   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:41:56.337373   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:41:56.337385   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:41:56.351064   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:41:56.351075   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:41:56.366670   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:41:56.366682   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:41:56.378342   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:41:56.378356   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:41:56.415236   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:41:56.415244   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:41:56.419514   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:41:56.419519   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:41:56.453724   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:41:56.453736   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:41:56.471894   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:41:56.471905   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:41:56.493620   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:41:56.493631   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:41:56.517596   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:41:56.517603   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:41:56.529370   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:41:56.529381   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:41:56.541089   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:41:56.541103   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
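	(The block above is one full iteration of the retry loop that repeats for the rest of this log: a healthz probe against https://10.0.2.15:8443 times out after exactly 5 seconds — compare the "Checking apiserver healthz" and "stopped" timestamps — and minikube then enumerates and dumps logs for every control-plane container before probing again. Below is a minimal Go sketch of that probe pattern, assuming a plain net/http client; the URL, the 5-second timeout, and the retry cadence are taken from the timestamps above, while the function, its parameters, and the skipped TLS verification are illustrative assumptions, not minikube's actual implementation.)

	// healthpoll.go — minimal sketch of the polling pattern visible in this log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz probes url until it returns 200 OK or attempts run out.
	// TLS verification is skipped here for brevity; minikube itself validates
	// the apiserver certificate against the cluster CA.
	func waitForHealthz(url string, interval time.Duration, attempts int) error {
		client := &http.Client{
			// Matches the ~5s gap between each "Checking" and "stopped" line.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			// In the real log, each failed probe is followed by a round of
			// container-log gathering before the next check.
			fmt.Printf("healthz not ready (attempt %d): %v\n", i+1, err)
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Second, 10); err != nil {
			fmt.Println(err)
		}
	}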
	I1030 11:41:59.054875   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:04.057247   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:04.057426   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:04.070795   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:04.070878   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:04.085962   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:04.086041   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:04.096359   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:42:04.096422   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:04.106799   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:04.106881   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:04.116860   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:04.116936   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:04.127433   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:04.127498   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:04.137478   14108 logs.go:282] 0 containers: []
	W1030 11:42:04.137489   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:04.137558   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:04.148312   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:04.148327   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:04.148333   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:42:04.160055   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:04.160067   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:04.176902   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:04.176913   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:04.190829   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:04.190839   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:04.195275   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:04.195282   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:04.229814   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:04.229826   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:04.250837   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:04.250848   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:04.262687   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:04.262700   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:04.273786   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:04.273799   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:04.288734   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:04.288746   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:04.299928   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:04.299940   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:04.336043   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:04.336053   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:04.347054   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:04.347064   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:06.872108   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:11.874495   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:11.875052   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:11.916178   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:11.916335   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:11.963656   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:11.963751   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:11.982798   14108 logs.go:282] 2 containers: [25858ca8d3a1 f684d59bb266]
	I1030 11:42:11.982875   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:12.010113   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:12.010203   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:12.029831   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:12.029912   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:12.050241   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:12.050328   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:12.065131   14108 logs.go:282] 0 containers: []
	W1030 11:42:12.065144   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:12.065216   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:12.092368   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:12.092386   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:12.092392   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:12.130785   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:12.130800   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:12.170537   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:12.170550   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:12.184706   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:12.184719   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:42:12.196859   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:12.196872   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:12.208257   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:12.208270   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:12.226841   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:12.226852   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:12.252138   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:12.252145   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:12.263365   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:12.263376   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:12.267679   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:12.267687   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:12.282301   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:12.282313   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:12.294063   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:12.294078   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:12.306189   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:12.306201   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:14.822410   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:19.824638   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:19.825170   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:19.861812   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:19.861962   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:19.883072   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:19.883172   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:19.898033   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:42:19.898122   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:19.910151   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:19.910225   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:19.921160   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:19.921241   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:19.932364   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:19.932438   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:19.942647   14108 logs.go:282] 0 containers: []
	W1030 11:42:19.942661   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:19.942728   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:19.954894   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:19.954910   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:19.954917   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:19.973328   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:19.973338   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:19.998953   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:19.998960   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:20.014695   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:20.014706   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:42:20.026716   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:20.026729   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:20.031349   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:42:20.031357   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:42:20.045017   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:20.045030   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:20.058932   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:42:20.058943   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:42:20.070158   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:20.070170   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:20.082336   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:20.082347   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:20.119751   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:20.119763   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:20.134068   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:20.134081   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:20.146592   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:20.146604   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:20.158347   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:20.158359   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:20.192892   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:20.192903   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:22.707141   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:27.709523   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:27.710033   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:27.749109   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:27.749241   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:27.771033   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:27.771160   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:27.786870   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:42:27.786950   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:27.801955   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:27.802025   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:27.813278   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:27.813357   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:27.823950   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:27.824026   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:27.833543   14108 logs.go:282] 0 containers: []
	W1030 11:42:27.833554   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:27.833618   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:27.843979   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:27.843998   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:27.844004   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:27.855496   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:27.855508   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:27.889747   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:27.889758   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:42:27.902061   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:27.902073   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:27.919527   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:27.919538   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:27.944758   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:27.944771   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:27.960522   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:42:27.960532   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:42:27.971745   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:27.971757   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:27.983224   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:27.983232   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:27.987628   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:27.987635   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:28.004975   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:42:28.004984   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:42:28.016029   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:28.016040   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:28.028621   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:28.028633   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:28.039904   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:28.039918   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:28.076575   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:28.076583   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:30.592422   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:35.595235   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:35.595714   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:35.636203   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:35.636339   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:35.659215   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:35.659346   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:35.674648   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:42:35.674734   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:35.687626   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:35.687701   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:35.700600   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:35.700676   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:35.711701   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:35.711788   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:35.722004   14108 logs.go:282] 0 containers: []
	W1030 11:42:35.722017   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:35.722076   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:35.733242   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:35.733258   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:35.733263   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:42:35.744836   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:35.744849   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:35.762171   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:35.762181   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:35.803486   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:35.803497   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:35.820499   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:42:35.820512   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:42:35.832066   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:35.832080   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:35.855552   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:35.855561   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:35.866668   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:35.866681   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:35.870976   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:35.870987   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:35.888788   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:42:35.888799   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:42:35.900168   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:35.900180   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:35.924627   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:35.924638   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:35.939107   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:35.939119   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:35.976293   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:35.976302   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:35.993740   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:35.993754   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:38.507024   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:43.509517   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:43.510078   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:43.555482   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:43.555625   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:43.578363   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:43.578459   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:43.594568   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:42:43.594653   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:43.605899   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:43.605973   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:43.616161   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:43.616220   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:43.627692   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:43.627767   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:43.638629   14108 logs.go:282] 0 containers: []
	W1030 11:42:43.638642   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:43.638707   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:43.651971   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:43.651987   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:43.651992   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:43.663898   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:43.663908   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:43.679291   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:43.679301   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:43.693699   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:42:43.693711   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:42:43.705071   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:42:43.705082   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:42:43.716522   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:43.716533   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:43.731698   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:43.731708   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:43.768193   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:43.768201   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:43.790783   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:43.790793   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:43.806964   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:43.806974   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:42:43.818850   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:43.818861   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:43.836348   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:43.836359   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:43.861743   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:43.861755   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:43.866025   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:43.866033   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:43.878051   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:43.878063   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:46.420057   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:51.422660   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:51.422890   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:51.439901   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:51.439987   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:51.450524   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:51.450604   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:51.461039   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:42:51.461120   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:51.471504   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:51.471582   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:51.482801   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:51.482874   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:51.497316   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:51.497395   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:51.507094   14108 logs.go:282] 0 containers: []
	W1030 11:42:51.507111   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:51.507177   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:51.517956   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:51.517974   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:51.517980   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:51.532458   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:42:51.532470   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:42:51.543733   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:51.543746   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:51.555259   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:51.555273   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:42:51.567390   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:51.567404   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:51.578745   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:51.578757   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:51.583410   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:51.583417   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:51.617910   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:51.617925   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:51.629873   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:51.629883   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:51.641319   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:51.641330   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:51.659326   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:51.659340   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:51.684848   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:51.684857   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:51.722394   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:51.722401   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:51.735967   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:42:51.735977   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:42:51.747346   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:51.747360   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:54.264248   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:42:59.265801   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:42:59.265930   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:42:59.279136   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:42:59.279217   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:42:59.289778   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:42:59.289849   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:42:59.300556   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:42:59.300639   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:42:59.311198   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:42:59.311265   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:42:59.321991   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:42:59.322067   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:42:59.332435   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:42:59.332512   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:42:59.342378   14108 logs.go:282] 0 containers: []
	W1030 11:42:59.342394   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:42:59.342461   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:42:59.353046   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:42:59.353063   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:42:59.353069   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:42:59.357380   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:42:59.357389   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:42:59.391131   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:42:59.391144   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:42:59.403003   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:42:59.403016   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:42:59.428465   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:42:59.428473   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:42:59.442199   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:42:59.442208   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:42:59.457499   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:42:59.457509   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:42:59.474792   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:42:59.474803   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:42:59.487393   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:42:59.487405   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:42:59.526398   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:42:59.526409   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:42:59.540687   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:42:59.540699   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:42:59.557026   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:42:59.557039   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:42:59.576876   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:42:59.576888   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:42:59.587795   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:42:59.587809   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:42:59.599048   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:42:59.599058   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:02.112400   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:43:07.113358   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:43:07.113428   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:43:07.125246   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:43:07.125339   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:43:07.137272   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:43:07.137348   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:43:07.149664   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:43:07.149735   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:43:07.160853   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:43:07.160917   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:43:07.171952   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:43:07.172038   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:43:07.183959   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:43:07.184047   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:43:07.194505   14108 logs.go:282] 0 containers: []
	W1030 11:43:07.194515   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:43:07.194578   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:43:07.205871   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:43:07.205887   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:43:07.205892   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:43:07.220810   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:43:07.220819   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:43:07.233048   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:43:07.233058   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:43:07.245367   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:43:07.245376   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:43:07.279807   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:43:07.279819   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:43:07.292904   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:43:07.292915   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:43:07.329658   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:43:07.329668   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:07.341525   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:43:07.341537   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:43:07.359241   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:43:07.359252   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:43:07.370974   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:43:07.370985   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:43:07.385612   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:43:07.385622   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:43:07.399899   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:43:07.399909   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:43:07.414948   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:43:07.414958   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:43:07.426656   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:43:07.426666   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:43:07.450793   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:43:07.450800   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:43:09.957396   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:43:14.959988   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:43:14.960225   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:43:14.980483   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:43:14.980599   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:43:14.995059   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:43:14.995137   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:43:15.007841   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:43:15.007918   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:43:15.018736   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:43:15.018807   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:43:15.029238   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:43:15.029310   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:43:15.039819   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:43:15.039898   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:43:15.049621   14108 logs.go:282] 0 containers: []
	W1030 11:43:15.049631   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:43:15.049685   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:43:15.060000   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:43:15.060018   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:43:15.060024   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:43:15.071636   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:43:15.071646   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:43:15.095259   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:43:15.095270   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:43:15.106679   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:43:15.106688   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:43:15.122061   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:43:15.122073   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:43:15.136958   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:43:15.136968   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:43:15.148953   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:43:15.148968   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:43:15.153390   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:43:15.153400   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:43:15.165448   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:43:15.165461   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:43:15.176631   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:43:15.176644   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:15.188522   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:43:15.188535   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:43:15.206612   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:43:15.206625   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:43:15.221026   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:43:15.221038   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:43:15.256421   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:43:15.256436   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:43:15.273347   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:43:15.273356   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:43:17.813167   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:43:22.814148   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:43:22.814240   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:43:22.826557   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:43:22.826614   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:43:22.837331   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:43:22.837398   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:43:22.847984   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:43:22.848058   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:43:22.859813   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:43:22.859882   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:43:22.871376   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:43:22.871450   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:43:22.883699   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:43:22.883769   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:43:22.895613   14108 logs.go:282] 0 containers: []
	W1030 11:43:22.895624   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:43:22.895683   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:43:22.906473   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:43:22.906492   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:43:22.906499   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:43:22.943839   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:43:22.943851   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:43:22.948325   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:43:22.948335   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:22.963046   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:43:22.963058   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:43:22.981555   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:43:22.981563   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:43:23.016929   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:43:23.016943   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:43:23.028523   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:43:23.028537   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:43:23.040871   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:43:23.040884   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:43:23.052830   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:43:23.052843   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:43:23.064195   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:43:23.064205   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:43:23.087752   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:43:23.087762   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:43:23.112737   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:43:23.112743   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:43:23.126638   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:43:23.126649   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:43:23.140694   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:43:23.140704   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:43:23.152108   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:43:23.152119   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:43:25.675980   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:43:30.676871   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:43:30.677469   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:43:30.719172   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:43:30.719317   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:43:30.743362   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:43:30.743493   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:43:30.760537   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:43:30.760632   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:43:30.773423   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:43:30.773494   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:43:30.785265   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:43:30.785349   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:43:30.797376   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:43:30.797468   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:43:30.816165   14108 logs.go:282] 0 containers: []
	W1030 11:43:30.816175   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:43:30.816242   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:43:30.829420   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:43:30.829438   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:43:30.829444   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:43:30.840776   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:43:30.840789   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:43:30.861003   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:43:30.861013   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:43:30.885255   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:43:30.885261   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:43:30.922707   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:43:30.922713   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:43:30.927057   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:43:30.927065   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:43:30.938750   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:43:30.938761   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:30.950389   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:43:30.950398   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:43:30.962604   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:43:30.962615   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:43:30.999909   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:43:30.999921   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:43:31.014205   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:43:31.014218   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:43:31.026062   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:43:31.026072   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:43:31.037771   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:43:31.037783   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:43:31.053201   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:43:31.053211   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:43:31.065042   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:43:31.065054   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:43:33.584702   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:43:38.587502   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:43:38.588087   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:43:38.629029   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:43:38.629184   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:43:38.651836   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:43:38.651972   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:43:38.668109   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:43:38.668197   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:43:38.680647   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:43:38.680719   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:43:38.691549   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:43:38.691639   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:43:38.702607   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:43:38.702679   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:43:38.713385   14108 logs.go:282] 0 containers: []
	W1030 11:43:38.713399   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:43:38.713469   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:43:38.724492   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:43:38.724510   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:43:38.724515   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:43:38.740741   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:43:38.740754   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:43:38.752824   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:43:38.752837   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:38.765011   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:43:38.765020   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:43:38.769220   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:43:38.769227   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:43:38.789628   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:43:38.789639   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:43:38.801456   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:43:38.801480   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:43:38.838622   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:43:38.838632   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:43:38.853672   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:43:38.853685   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:43:38.883497   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:43:38.883510   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:43:38.901916   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:43:38.901926   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:43:38.913583   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:43:38.913597   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:43:38.927708   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:43:38.927720   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:43:38.939142   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:43:38.939153   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:43:38.963958   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:43:38.963966   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:43:41.500547   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:43:46.503237   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:43:46.503797   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:43:46.541277   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:43:46.541429   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:43:46.562516   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:43:46.562639   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:43:46.576973   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:43:46.577065   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:43:46.589893   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:43:46.589973   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:43:46.600939   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:43:46.601019   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:43:46.611317   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:43:46.611394   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:43:46.622356   14108 logs.go:282] 0 containers: []
	W1030 11:43:46.622366   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:43:46.622428   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:43:46.633003   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:43:46.633020   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:43:46.633026   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:43:46.650132   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:43:46.650144   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:43:46.665124   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:43:46.665132   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:46.676782   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:43:46.676794   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:43:46.680866   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:43:46.680875   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:43:46.716283   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:43:46.716292   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:43:46.730709   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:43:46.730722   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:43:46.742398   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:43:46.742411   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:43:46.754515   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:43:46.754530   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:43:46.767284   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:43:46.767297   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:43:46.778608   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:43:46.778621   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:43:46.818553   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:43:46.818568   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:43:46.833330   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:43:46.833340   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:43:46.845400   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:43:46.845413   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:43:46.865163   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:43:46.865172   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:43:49.389784   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:43:54.392681   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:43:54.393202   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1030 11:43:54.428490   14108 logs.go:282] 1 containers: [521ba8592369]
	I1030 11:43:54.428634   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1030 11:43:54.448107   14108 logs.go:282] 1 containers: [3373bcef39ee]
	I1030 11:43:54.448216   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1030 11:43:54.463054   14108 logs.go:282] 4 containers: [a3252115dd7b 4805952979a7 25858ca8d3a1 f684d59bb266]
	I1030 11:43:54.463144   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1030 11:43:54.474819   14108 logs.go:282] 1 containers: [c1426b236b71]
	I1030 11:43:54.474902   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1030 11:43:54.485961   14108 logs.go:282] 1 containers: [0d10b0d78770]
	I1030 11:43:54.486045   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1030 11:43:54.497216   14108 logs.go:282] 1 containers: [39956b86d7ab]
	I1030 11:43:54.497295   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1030 11:43:54.507514   14108 logs.go:282] 0 containers: []
	W1030 11:43:54.507527   14108 logs.go:284] No container was found matching "kindnet"
	I1030 11:43:54.507591   14108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1030 11:43:54.518259   14108 logs.go:282] 1 containers: [655e2614134a]
	I1030 11:43:54.518275   14108 logs.go:123] Gathering logs for etcd [3373bcef39ee] ...
	I1030 11:43:54.518280   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373bcef39ee"
	I1030 11:43:54.532277   14108 logs.go:123] Gathering logs for coredns [a3252115dd7b] ...
	I1030 11:43:54.532289   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3252115dd7b"
	I1030 11:43:54.544545   14108 logs.go:123] Gathering logs for kube-proxy [0d10b0d78770] ...
	I1030 11:43:54.544554   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d10b0d78770"
	I1030 11:43:54.556515   14108 logs.go:123] Gathering logs for kube-controller-manager [39956b86d7ab] ...
	I1030 11:43:54.556528   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39956b86d7ab"
	I1030 11:43:54.578145   14108 logs.go:123] Gathering logs for dmesg ...
	I1030 11:43:54.578157   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1030 11:43:54.582371   14108 logs.go:123] Gathering logs for kube-scheduler [c1426b236b71] ...
	I1030 11:43:54.582380   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1426b236b71"
	I1030 11:43:54.598181   14108 logs.go:123] Gathering logs for Docker ...
	I1030 11:43:54.598192   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1030 11:43:54.622796   14108 logs.go:123] Gathering logs for coredns [f684d59bb266] ...
	I1030 11:43:54.622803   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f684d59bb266"
	I1030 11:43:54.635468   14108 logs.go:123] Gathering logs for storage-provisioner [655e2614134a] ...
	I1030 11:43:54.635480   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 655e2614134a"
	I1030 11:43:54.647494   14108 logs.go:123] Gathering logs for container status ...
	I1030 11:43:54.647509   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1030 11:43:54.660070   14108 logs.go:123] Gathering logs for kubelet ...
	I1030 11:43:54.660081   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1030 11:43:54.699615   14108 logs.go:123] Gathering logs for describe nodes ...
	I1030 11:43:54.699622   14108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1030 11:43:54.734653   14108 logs.go:123] Gathering logs for kube-apiserver [521ba8592369] ...
	I1030 11:43:54.734665   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521ba8592369"
	I1030 11:43:54.750019   14108 logs.go:123] Gathering logs for coredns [4805952979a7] ...
	I1030 11:43:54.750030   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4805952979a7"
	I1030 11:43:54.762012   14108 logs.go:123] Gathering logs for coredns [25858ca8d3a1] ...
	I1030 11:43:54.762023   14108 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25858ca8d3a1"
	I1030 11:43:57.280854   14108 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1030 11:44:02.283573   14108 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1030 11:44:02.286976   14108 out.go:201] 
	W1030 11:44:02.290913   14108 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1030 11:44:02.290921   14108 out.go:270] * 
	* 
	W1030 11:44:02.291601   14108 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:44:02.306939   14108 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-877000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (690.98s)
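The failure above is not a VM-creation problem: the guest booted, but every probe of https://10.0.2.15:8443/healthz timed out until the 6m0s node wait expired. A minimal manual check along the same lines, assuming the stopped-upgrade-877000 guest is still up and reachable over SSH (the container ID is the kube-apiserver container the log collector found above):

	# Probe the same healthz endpoint the test polls; -k skips verification of the self-signed cert
	minikube ssh -p stopped-upgrade-877000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# Tail the apiserver container's own logs for the reason it never went healthy
	minikube ssh -p stopped-upgrade-877000 -- docker logs --tail 50 521ba8592369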

TestPause/serial/Start (10s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-827000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-827000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.938103791s)

-- stdout --
	* [pause-827000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-827000" primary control-plane node in "pause-827000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-827000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-827000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-827000 -n pause-827000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-827000 -n pause-827000: exit status 7 (63.864084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-827000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.00s)
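This failure, and every NoKubernetes and network-plugin failure that follows, shares one root cause: the qemu2 driver's socket_vmnet client cannot connect to /var/run/socket_vmnet on the host. A host-side triage sketch, assuming socket_vmnet was installed as a Homebrew launchd service (the service label is an assumption; the socket path is taken from the logs):

	# Does the socket exist on the host?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet daemon registered with launchd? (label is an assumption)
	sudo launchctl list | grep -i socket_vmnet

	# If the Homebrew service is installed but dead, restarting it should recreate the socket
	sudo brew services restart socket_vmnet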

TestNoKubernetes/serial/StartWithK8s (9.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-443000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-443000 --driver=qemu2 : exit status 80 (9.909597667s)

-- stdout --
	* [NoKubernetes-443000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-443000" primary control-plane node in "NoKubernetes-443000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-443000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-443000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-443000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000: exit status 7 (72.302875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-443000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)
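The stderr above names its own recovery path. A sketch of that cleanup-and-retry, reusing the binary, profile, and driver flag from the failing invocation:

	# Drop the half-provisioned profile, as the error message recommends
	out/minikube-darwin-arm64 delete -p NoKubernetes-443000

	# Retry the same start once /var/run/socket_vmnet accepts connections again
	out/minikube-darwin-arm64 start -p NoKubernetes-443000 --driver=qemu2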

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --driver=qemu2 : exit status 80 (5.263278s)

-- stdout --
	* [NoKubernetes-443000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-443000
	* Restarting existing qemu2 VM for "NoKubernetes-443000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-443000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-443000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000: exit status 7 (59.979583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-443000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --driver=qemu2 : exit status 80 (5.26224675s)

-- stdout --
	* [NoKubernetes-443000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-443000
	* Restarting existing qemu2 VM for "NoKubernetes-443000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-443000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-443000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000: exit status 7 (53.320625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-443000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.38s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-443000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-443000 --driver=qemu2 : exit status 80 (5.303926875s)

-- stdout --
	* [NoKubernetes-443000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-443000
	* Restarting existing qemu2 VM for "NoKubernetes-443000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-443000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-443000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-443000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-443000 -n NoKubernetes-443000: exit status 7 (72.492958ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-443000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.38s)

TestNetworkPlugins/group/auto/Start (9.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.695554459s)

-- stdout --
	* [auto-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-286000" primary control-plane node in "auto-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:40:04.530360   14304 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:40:04.530514   14304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:04.530517   14304 out.go:358] Setting ErrFile to fd 2...
	I1030 11:40:04.530520   14304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:04.530678   14304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:40:04.532009   14304 out.go:352] Setting JSON to false
	I1030 11:40:04.550332   14304 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7775,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:40:04.550428   14304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:40:04.557744   14304 out.go:177] * [auto-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:40:04.566610   14304 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:40:04.566697   14304 notify.go:220] Checking for updates...
	I1030 11:40:04.573577   14304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:40:04.576499   14304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:40:04.580586   14304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:40:04.583626   14304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:40:04.586606   14304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:40:04.590016   14304 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:40:04.590088   14304 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:40:04.590138   14304 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:40:04.594559   14304 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:40:04.601579   14304 start.go:297] selected driver: qemu2
	I1030 11:40:04.601585   14304 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:40:04.601592   14304 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:40:04.604263   14304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:40:04.608662   14304 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:40:04.611631   14304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:40:04.611648   14304 cni.go:84] Creating CNI manager for ""
	I1030 11:40:04.611672   14304 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:40:04.611677   14304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:40:04.611706   14304 start.go:340] cluster config:
	{Name:auto-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:40:04.616492   14304 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:40:04.623524   14304 out.go:177] * Starting "auto-286000" primary control-plane node in "auto-286000" cluster
	I1030 11:40:04.627558   14304 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:40:04.627573   14304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:40:04.627582   14304 cache.go:56] Caching tarball of preloaded images
	I1030 11:40:04.627659   14304 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:40:04.627665   14304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:40:04.627722   14304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/auto-286000/config.json ...
	I1030 11:40:04.627732   14304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/auto-286000/config.json: {Name:mkf89326659e31318c15a73d16a21d26bcc60d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:40:04.628030   14304 start.go:360] acquireMachinesLock for auto-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:04.628079   14304 start.go:364] duration metric: took 42.459µs to acquireMachinesLock for "auto-286000"
	I1030 11:40:04.628091   14304 start.go:93] Provisioning new machine with config: &{Name:auto-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:04.628132   14304 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:04.632568   14304 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:04.648309   14304 start.go:159] libmachine.API.Create for "auto-286000" (driver="qemu2")
	I1030 11:40:04.648340   14304 client.go:168] LocalClient.Create starting
	I1030 11:40:04.648411   14304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:04.648453   14304 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:04.648468   14304 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:04.648514   14304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:04.648545   14304 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:04.648556   14304 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:04.648920   14304 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:04.814710   14304 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:04.849705   14304 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:04.849712   14304 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:04.849914   14304 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2
	I1030 11:40:04.859789   14304 main.go:141] libmachine: STDOUT: 
	I1030 11:40:04.859811   14304 main.go:141] libmachine: STDERR: 
	I1030 11:40:04.859874   14304 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2 +20000M
	I1030 11:40:04.868377   14304 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:04.868391   14304 main.go:141] libmachine: STDERR: 
	I1030 11:40:04.868407   14304 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2
	I1030 11:40:04.868414   14304 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:04.868427   14304 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:04.868465   14304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:66:8b:73:56:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2
	I1030 11:40:04.870360   14304 main.go:141] libmachine: STDOUT: 
	I1030 11:40:04.870375   14304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:04.870396   14304 client.go:171] duration metric: took 222.052083ms to LocalClient.Create
	I1030 11:40:06.872569   14304 start.go:128] duration metric: took 2.244435375s to createHost
	I1030 11:40:06.872637   14304 start.go:83] releasing machines lock for "auto-286000", held for 2.244576583s
	W1030 11:40:06.872693   14304 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:06.882584   14304 out.go:177] * Deleting "auto-286000" in qemu2 ...
	W1030 11:40:06.908228   14304 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:06.908255   14304 start.go:729] Will try again in 5 seconds ...
	I1030 11:40:11.909477   14304 start.go:360] acquireMachinesLock for auto-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:11.909679   14304 start.go:364] duration metric: took 181.666µs to acquireMachinesLock for "auto-286000"
	I1030 11:40:11.909699   14304 start.go:93] Provisioning new machine with config: &{Name:auto-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:auto-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:11.909749   14304 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:11.915020   14304 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:11.930407   14304 start.go:159] libmachine.API.Create for "auto-286000" (driver="qemu2")
	I1030 11:40:11.930443   14304 client.go:168] LocalClient.Create starting
	I1030 11:40:11.930521   14304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:11.930561   14304 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:11.930572   14304 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:11.930612   14304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:11.930645   14304 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:11.930652   14304 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:11.930960   14304 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:12.093635   14304 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:12.132458   14304 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:12.132469   14304 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:12.132667   14304 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2
	I1030 11:40:12.142819   14304 main.go:141] libmachine: STDOUT: 
	I1030 11:40:12.142836   14304 main.go:141] libmachine: STDERR: 
	I1030 11:40:12.142904   14304 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2 +20000M
	I1030 11:40:12.152456   14304 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:12.152480   14304 main.go:141] libmachine: STDERR: 
	I1030 11:40:12.152495   14304 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2
	I1030 11:40:12.152504   14304 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:12.152514   14304 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:12.152559   14304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:3d:4f:4a:52:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/auto-286000/disk.qcow2
	I1030 11:40:12.154926   14304 main.go:141] libmachine: STDOUT: 
	I1030 11:40:12.154942   14304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:12.154956   14304 client.go:171] duration metric: took 224.512ms to LocalClient.Create
	I1030 11:40:14.157017   14304 start.go:128] duration metric: took 2.247283s to createHost
	I1030 11:40:14.157048   14304 start.go:83] releasing machines lock for "auto-286000", held for 2.247387625s
	W1030 11:40:14.157186   14304 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:14.166567   14304 out.go:201] 
	W1030 11:40:14.172647   14304 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:40:14.172652   14304 out.go:270] * 
	* 
	W1030 11:40:14.173216   14304 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:40:14.181621   14304 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.70s)
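
The auto, kindnet, and calico starts in this group all fail identically: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to the /var/run/socket_vmnet Unix socket is refused, meaning no socket_vmnet daemon was listening on the build agent. A minimal Go sketch (hypothetical, not minikube code) reproduces the symptom; dialing a Unix socket whose file exists but has no listener fails with "connection refused" (a missing socket file would instead yield "no such file or directory"):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Same socket path the driver uses in the logs above. With the
        // socket file present but no socket_vmnet daemon accepting on it,
        // Dial returns "connect: connection refused".
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }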

TestNetworkPlugins/group/kindnet/Start (10s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.000891333s)

-- stdout --
	* [kindnet-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-286000" primary control-plane node in "kindnet-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:40:16.676841   14426 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:40:16.677008   14426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:16.677011   14426 out.go:358] Setting ErrFile to fd 2...
	I1030 11:40:16.677014   14426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:16.677147   14426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:40:16.678384   14426 out.go:352] Setting JSON to false
	I1030 11:40:16.696127   14426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7787,"bootTime":1730305829,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:40:16.696210   14426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:40:16.701746   14426 out.go:177] * [kindnet-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:40:16.709778   14426 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:40:16.709842   14426 notify.go:220] Checking for updates...
	I1030 11:40:16.717717   14426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:40:16.720732   14426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:40:16.724717   14426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:40:16.727723   14426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:40:16.730841   14426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:40:16.734099   14426 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:40:16.734172   14426 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:40:16.734224   14426 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:40:16.739310   14426 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:40:16.746714   14426 start.go:297] selected driver: qemu2
	I1030 11:40:16.746721   14426 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:40:16.746729   14426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:40:16.749274   14426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:40:16.752695   14426 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:40:16.755730   14426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:40:16.755746   14426 cni.go:84] Creating CNI manager for "kindnet"
	I1030 11:40:16.755754   14426 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 11:40:16.755798   14426 start.go:340] cluster config:
	{Name:kindnet-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:40:16.760194   14426 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:40:16.768744   14426 out.go:177] * Starting "kindnet-286000" primary control-plane node in "kindnet-286000" cluster
	I1030 11:40:16.772687   14426 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:40:16.772700   14426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:40:16.772710   14426 cache.go:56] Caching tarball of preloaded images
	I1030 11:40:16.772774   14426 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:40:16.772779   14426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:40:16.772827   14426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/kindnet-286000/config.json ...
	I1030 11:40:16.772836   14426 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/kindnet-286000/config.json: {Name:mk3e12e5202be5b539f3db0128801fe5d8ef8707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:40:16.773180   14426 start.go:360] acquireMachinesLock for kindnet-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:16.773226   14426 start.go:364] duration metric: took 41.958µs to acquireMachinesLock for "kindnet-286000"
	I1030 11:40:16.773236   14426 start.go:93] Provisioning new machine with config: &{Name:kindnet-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kindnet-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:16.773258   14426 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:16.776710   14426 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:16.791787   14426 start.go:159] libmachine.API.Create for "kindnet-286000" (driver="qemu2")
	I1030 11:40:16.791820   14426 client.go:168] LocalClient.Create starting
	I1030 11:40:16.791902   14426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:16.791948   14426 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:16.791961   14426 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:16.791996   14426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:16.792025   14426 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:16.792037   14426 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:16.792426   14426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:16.954318   14426 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:17.144947   14426 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:17.144969   14426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:17.145198   14426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2
	I1030 11:40:17.155678   14426 main.go:141] libmachine: STDOUT: 
	I1030 11:40:17.155704   14426 main.go:141] libmachine: STDERR: 
	I1030 11:40:17.155763   14426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2 +20000M
	I1030 11:40:17.164479   14426 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:17.164495   14426 main.go:141] libmachine: STDERR: 
	I1030 11:40:17.164508   14426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2
	I1030 11:40:17.164512   14426 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:17.164527   14426 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:17.164559   14426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:4c:1c:3d:c7:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2
	I1030 11:40:17.166331   14426 main.go:141] libmachine: STDOUT: 
	I1030 11:40:17.166343   14426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:17.166363   14426 client.go:171] duration metric: took 374.542125ms to LocalClient.Create
	I1030 11:40:19.168554   14426 start.go:128] duration metric: took 2.395290083s to createHost
	I1030 11:40:19.168642   14426 start.go:83] releasing machines lock for "kindnet-286000", held for 2.395433917s
	W1030 11:40:19.168694   14426 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:19.175404   14426 out.go:177] * Deleting "kindnet-286000" in qemu2 ...
	W1030 11:40:19.208666   14426 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:19.208745   14426 start.go:729] Will try again in 5 seconds ...
	I1030 11:40:24.211002   14426 start.go:360] acquireMachinesLock for kindnet-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:24.211620   14426 start.go:364] duration metric: took 520.041µs to acquireMachinesLock for "kindnet-286000"
	I1030 11:40:24.211699   14426 start.go:93] Provisioning new machine with config: &{Name:kindnet-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kindnet-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:24.212020   14426 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:24.217605   14426 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:24.265749   14426 start.go:159] libmachine.API.Create for "kindnet-286000" (driver="qemu2")
	I1030 11:40:24.265807   14426 client.go:168] LocalClient.Create starting
	I1030 11:40:24.265954   14426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:24.266044   14426 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:24.266065   14426 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:24.266143   14426 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:24.266217   14426 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:24.266231   14426 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:24.266888   14426 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:24.440104   14426 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:24.578411   14426 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:24.578419   14426 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:24.578617   14426 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2
	I1030 11:40:24.588558   14426 main.go:141] libmachine: STDOUT: 
	I1030 11:40:24.588582   14426 main.go:141] libmachine: STDERR: 
	I1030 11:40:24.588651   14426 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2 +20000M
	I1030 11:40:24.597074   14426 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:24.597100   14426 main.go:141] libmachine: STDERR: 
	I1030 11:40:24.597119   14426 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2
	I1030 11:40:24.597124   14426 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:24.597138   14426 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:24.597178   14426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:6c:39:76:15:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kindnet-286000/disk.qcow2
	I1030 11:40:24.599002   14426 main.go:141] libmachine: STDOUT: 
	I1030 11:40:24.599029   14426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:24.599044   14426 client.go:171] duration metric: took 333.233666ms to LocalClient.Create
	I1030 11:40:26.601238   14426 start.go:128] duration metric: took 2.389208417s to createHost
	I1030 11:40:26.601322   14426 start.go:83] releasing machines lock for "kindnet-286000", held for 2.389705292s
	W1030 11:40:26.601816   14426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:26.614513   14426 out.go:201] 
	W1030 11:40:26.618511   14426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:40:26.618546   14426 out.go:270] * 
	* 
	W1030 11:40:26.621196   14426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:40:26.630444   14426 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.00s)
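
The kindnet run shows the same retry policy as the auto run: one failed createHost, a logged warning, a fixed 5-second wait, one more attempt, then exit status 80 with GUEST_PROVISION. A hypothetical sketch of that control flow (names invented for illustration, not the driver's actual code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the driver's host-creation step and always
    // fails the way the log does.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }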

TestNetworkPlugins/group/calico/Start (9.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.939985209s)

-- stdout --
	* [calico-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-286000" primary control-plane node in "calico-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:40:29.163223   14539 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:40:29.163381   14539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:29.163385   14539 out.go:358] Setting ErrFile to fd 2...
	I1030 11:40:29.163387   14539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:29.163509   14539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:40:29.164694   14539 out.go:352] Setting JSON to false
	I1030 11:40:29.182529   14539 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7800,"bootTime":1730305829,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:40:29.182592   14539 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:40:29.189132   14539 out.go:177] * [calico-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:40:29.197130   14539 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:40:29.197236   14539 notify.go:220] Checking for updates...
	I1030 11:40:29.204103   14539 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:40:29.207062   14539 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:40:29.211113   14539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:40:29.214105   14539 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:40:29.217016   14539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:40:29.220484   14539 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:40:29.220560   14539 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:40:29.220607   14539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:40:29.225038   14539 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:40:29.232070   14539 start.go:297] selected driver: qemu2
	I1030 11:40:29.232077   14539 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:40:29.232087   14539 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:40:29.234601   14539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:40:29.238061   14539 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:40:29.241290   14539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:40:29.241314   14539 cni.go:84] Creating CNI manager for "calico"
	I1030 11:40:29.241318   14539 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1030 11:40:29.241365   14539 start.go:340] cluster config:
	{Name:calico-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:40:29.246114   14539 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:40:29.253962   14539 out.go:177] * Starting "calico-286000" primary control-plane node in "calico-286000" cluster
	I1030 11:40:29.258032   14539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:40:29.258044   14539 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:40:29.258050   14539 cache.go:56] Caching tarball of preloaded images
	I1030 11:40:29.258109   14539 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:40:29.258114   14539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:40:29.258164   14539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/calico-286000/config.json ...
	I1030 11:40:29.258174   14539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/calico-286000/config.json: {Name:mk6579ae7cdffd423b023a3323b3804f63326a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:40:29.258401   14539 start.go:360] acquireMachinesLock for calico-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:29.258443   14539 start.go:364] duration metric: took 36.917µs to acquireMachinesLock for "calico-286000"
	I1030 11:40:29.258454   14539 start.go:93] Provisioning new machine with config: &{Name:calico-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:29.258478   14539 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:29.267065   14539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:29.281818   14539 start.go:159] libmachine.API.Create for "calico-286000" (driver="qemu2")
	I1030 11:40:29.281851   14539 client.go:168] LocalClient.Create starting
	I1030 11:40:29.281921   14539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:29.281958   14539 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:29.281971   14539 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:29.282010   14539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:29.282039   14539 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:29.282048   14539 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:29.282421   14539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:29.446871   14539 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:29.490817   14539 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:29.490824   14539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:29.491038   14539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2
	I1030 11:40:29.501324   14539 main.go:141] libmachine: STDOUT: 
	I1030 11:40:29.501348   14539 main.go:141] libmachine: STDERR: 
	I1030 11:40:29.501403   14539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2 +20000M
	I1030 11:40:29.510345   14539 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:29.510369   14539 main.go:141] libmachine: STDERR: 
	I1030 11:40:29.510400   14539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2
	I1030 11:40:29.510406   14539 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:29.510417   14539 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:29.510444   14539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:e9:32:fc:8d:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2
	I1030 11:40:29.512290   14539 main.go:141] libmachine: STDOUT: 
	I1030 11:40:29.512311   14539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:29.512331   14539 client.go:171] duration metric: took 230.476917ms to LocalClient.Create
	I1030 11:40:31.514498   14539 start.go:128] duration metric: took 2.25602325s to createHost
	I1030 11:40:31.514580   14539 start.go:83] releasing machines lock for "calico-286000", held for 2.256153625s
	W1030 11:40:31.514644   14539 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:31.524034   14539 out.go:177] * Deleting "calico-286000" in qemu2 ...
	W1030 11:40:31.551700   14539 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:31.551728   14539 start.go:729] Will try again in 5 seconds ...
	I1030 11:40:36.554001   14539 start.go:360] acquireMachinesLock for calico-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:36.554626   14539 start.go:364] duration metric: took 532.75µs to acquireMachinesLock for "calico-286000"
	I1030 11:40:36.554706   14539 start.go:93] Provisioning new machine with config: &{Name:calico-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:36.555046   14539 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:36.563592   14539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:36.612130   14539 start.go:159] libmachine.API.Create for "calico-286000" (driver="qemu2")
	I1030 11:40:36.612190   14539 client.go:168] LocalClient.Create starting
	I1030 11:40:36.612328   14539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:36.612404   14539 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:36.612419   14539 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:36.612486   14539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:36.612546   14539 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:36.612557   14539 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:36.613158   14539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:36.788246   14539 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:37.016186   14539 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:37.016212   14539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:37.016474   14539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2
	I1030 11:40:37.026740   14539 main.go:141] libmachine: STDOUT: 
	I1030 11:40:37.026756   14539 main.go:141] libmachine: STDERR: 
	I1030 11:40:37.026833   14539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2 +20000M
	I1030 11:40:37.035352   14539 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:37.035368   14539 main.go:141] libmachine: STDERR: 
	I1030 11:40:37.035382   14539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2
	I1030 11:40:37.035386   14539 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:37.035396   14539 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:37.035430   14539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:3b:65:3c:4d:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/calico-286000/disk.qcow2
	I1030 11:40:37.037261   14539 main.go:141] libmachine: STDOUT: 
	I1030 11:40:37.037277   14539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:37.037290   14539 client.go:171] duration metric: took 425.100583ms to LocalClient.Create
	I1030 11:40:39.039346   14539 start.go:128] duration metric: took 2.484311375s to createHost
	I1030 11:40:39.039388   14539 start.go:83] releasing machines lock for "calico-286000", held for 2.484762292s
	W1030 11:40:39.039480   14539 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:39.048778   14539 out.go:201] 
	W1030 11:40:39.053718   14539 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:40:39.053728   14539 out.go:270] * 
	* 
	W1030 11:40:39.054328   14539 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:40:39.060732   14539 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.94s)
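Every failure in this group dies at the same step: the qemu2 driver hands the QEMU command line to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so each create attempt ends in "Connection refused" and minikube exits with status 80. That connectivity check can be reproduced outside the test suite; the following is a minimal Go sketch (not part of minikube or the suite), assuming only the default socket path shown in the logs:

    // probe.go - dial the socket_vmnet unix socket the same way a client
    // would and report whether anything is listening there.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // default path from the logs
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Matches the driver's failure mode:
            // Failed to connect to "/var/run/socket_vmnet": Connection refused
            fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
            return
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }

If the dial is refused, the daemon is simply not running (or not listening at that path) on the build agent, which is consistent with every VM-creation failure below.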

TestNetworkPlugins/group/custom-flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.890722375s)

-- stdout --
	* [custom-flannel-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-286000" primary control-plane node in "custom-flannel-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:40:41.624900   14660 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:40:41.625062   14660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:41.625065   14660 out.go:358] Setting ErrFile to fd 2...
	I1030 11:40:41.625067   14660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:41.625187   14660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:40:41.626392   14660 out.go:352] Setting JSON to false
	I1030 11:40:41.644382   14660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7812,"bootTime":1730305829,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:40:41.644452   14660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:40:41.649434   14660 out.go:177] * [custom-flannel-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:40:41.656331   14660 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:40:41.656352   14660 notify.go:220] Checking for updates...
	I1030 11:40:41.663297   14660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:40:41.666365   14660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:40:41.670362   14660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:40:41.673391   14660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:40:41.676295   14660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:40:41.679698   14660 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:40:41.679771   14660 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:40:41.679818   14660 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:40:41.683292   14660 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:40:41.690351   14660 start.go:297] selected driver: qemu2
	I1030 11:40:41.690358   14660 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:40:41.690365   14660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:40:41.692904   14660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:40:41.697283   14660 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:40:41.700423   14660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:40:41.700439   14660 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1030 11:40:41.700448   14660 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1030 11:40:41.700477   14660 start.go:340] cluster config:
	{Name:custom-flannel-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:40:41.705084   14660 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:40:41.712357   14660 out.go:177] * Starting "custom-flannel-286000" primary control-plane node in "custom-flannel-286000" cluster
	I1030 11:40:41.716367   14660 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:40:41.716382   14660 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:40:41.716392   14660 cache.go:56] Caching tarball of preloaded images
	I1030 11:40:41.716466   14660 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:40:41.716472   14660 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:40:41.716553   14660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/custom-flannel-286000/config.json ...
	I1030 11:40:41.716563   14660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/custom-flannel-286000/config.json: {Name:mk810fcf8ffdf2795606171976ec977b1f3d222e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:40:41.716818   14660 start.go:360] acquireMachinesLock for custom-flannel-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:41.716866   14660 start.go:364] duration metric: took 41.708µs to acquireMachinesLock for "custom-flannel-286000"
	I1030 11:40:41.716877   14660 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:41.716904   14660 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:41.720318   14660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:41.736009   14660 start.go:159] libmachine.API.Create for "custom-flannel-286000" (driver="qemu2")
	I1030 11:40:41.736036   14660 client.go:168] LocalClient.Create starting
	I1030 11:40:41.736108   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:41.736147   14660 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:41.736159   14660 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:41.736193   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:41.736225   14660 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:41.736231   14660 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:41.736586   14660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:41.899242   14660 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:42.110358   14660 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:42.110369   14660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:42.110592   14660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2
	I1030 11:40:42.120859   14660 main.go:141] libmachine: STDOUT: 
	I1030 11:40:42.120880   14660 main.go:141] libmachine: STDERR: 
	I1030 11:40:42.120951   14660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2 +20000M
	I1030 11:40:42.129879   14660 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:42.129894   14660 main.go:141] libmachine: STDERR: 
	I1030 11:40:42.129917   14660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2
	I1030 11:40:42.129923   14660 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:42.129933   14660 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:42.129968   14660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:26:db:a1:17:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2
	I1030 11:40:42.131778   14660 main.go:141] libmachine: STDOUT: 
	I1030 11:40:42.131793   14660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:42.131814   14660 client.go:171] duration metric: took 395.776875ms to LocalClient.Create
	I1030 11:40:44.133943   14660 start.go:128] duration metric: took 2.417047625s to createHost
	I1030 11:40:44.134019   14660 start.go:83] releasing machines lock for "custom-flannel-286000", held for 2.417175667s
	W1030 11:40:44.134047   14660 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:44.142430   14660 out.go:177] * Deleting "custom-flannel-286000" in qemu2 ...
	W1030 11:40:44.163034   14660 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:44.163064   14660 start.go:729] Will try again in 5 seconds ...
	I1030 11:40:49.165228   14660 start.go:360] acquireMachinesLock for custom-flannel-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:49.165541   14660 start.go:364] duration metric: took 243.833µs to acquireMachinesLock for "custom-flannel-286000"
	I1030 11:40:49.165576   14660 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:49.165700   14660 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:49.176051   14660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:49.205776   14660 start.go:159] libmachine.API.Create for "custom-flannel-286000" (driver="qemu2")
	I1030 11:40:49.205820   14660 client.go:168] LocalClient.Create starting
	I1030 11:40:49.205958   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:49.206028   14660 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:49.206041   14660 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:49.206093   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:49.206139   14660 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:49.206154   14660 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:49.206733   14660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:49.375800   14660 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:49.415733   14660 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:49.415739   14660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:49.415909   14660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2
	I1030 11:40:49.425875   14660 main.go:141] libmachine: STDOUT: 
	I1030 11:40:49.425903   14660 main.go:141] libmachine: STDERR: 
	I1030 11:40:49.425965   14660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2 +20000M
	I1030 11:40:49.434628   14660 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:49.434645   14660 main.go:141] libmachine: STDERR: 
	I1030 11:40:49.434657   14660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2
	I1030 11:40:49.434661   14660 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:49.434670   14660 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:49.434707   14660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:7b:7b:7f:39:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/custom-flannel-286000/disk.qcow2
	I1030 11:40:49.436618   14660 main.go:141] libmachine: STDOUT: 
	I1030 11:40:49.436640   14660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:49.436655   14660 client.go:171] duration metric: took 230.832291ms to LocalClient.Create
	I1030 11:40:51.438904   14660 start.go:128] duration metric: took 2.273191459s to createHost
	I1030 11:40:51.438981   14660 start.go:83] releasing machines lock for "custom-flannel-286000", held for 2.273452958s
	W1030 11:40:51.439376   14660 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:51.452024   14660 out.go:201] 
	W1030 11:40:51.455005   14660 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:40:51.455027   14660 out.go:270] * 
	* 
	W1030 11:40:51.457118   14660 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:40:51.468964   14660 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.89s)
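The recovery path is visible in the stderr above: after the first StartHost failure the driver deletes the half-created profile, waits a fixed 5 seconds ("Will try again in 5 seconds ..."), and makes exactly one more attempt before giving up. A small Go sketch of that control flow, where createHost is an illustrative stand-in for the real provisioning call rather than minikube's actual API:

    // retry.go - one retry after a fixed delay, mirroring the log's
    // "StartHost failed, but will try again" / "Will try again in 5 seconds".
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the real work; here it always fails the
    // way the log does, so the retry path is exercised.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry(attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = createHost(); err == nil {
                return nil
            }
            if i < attempts-1 {
                fmt.Printf("! StartHost failed, but will try again: %v\n", err)
                time.Sleep(delay)
            }
        }
        return err
    }

    func main() {
        if err := startWithRetry(2, 5*time.Second); err != nil {
            fmt.Printf("X Exiting: %v\n", err)
        }
    }

Because the daemon never comes up between attempts, the second try fails identically and the test still exits with status 80.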

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.783695667s)

-- stdout --
	* [false-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-286000" primary control-plane node in "false-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:40:54.060338   14777 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:40:54.060514   14777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:54.060517   14777 out.go:358] Setting ErrFile to fd 2...
	I1030 11:40:54.060520   14777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:40:54.060648   14777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:40:54.061817   14777 out.go:352] Setting JSON to false
	I1030 11:40:54.079634   14777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7825,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:40:54.079707   14777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:40:54.082627   14777 out.go:177] * [false-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:40:54.090299   14777 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:40:54.090358   14777 notify.go:220] Checking for updates...
	I1030 11:40:54.097144   14777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:40:54.100105   14777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:40:54.103052   14777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:40:54.106131   14777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:40:54.109183   14777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:40:54.110914   14777 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:40:54.110992   14777 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:40:54.111032   14777 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:40:54.114155   14777 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:40:54.121032   14777 start.go:297] selected driver: qemu2
	I1030 11:40:54.121039   14777 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:40:54.121048   14777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:40:54.123404   14777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:40:54.127096   14777 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:40:54.130247   14777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:40:54.130269   14777 cni.go:84] Creating CNI manager for "false"
	I1030 11:40:54.130309   14777 start.go:340] cluster config:
	{Name:false-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:40:54.134496   14777 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:40:54.143123   14777 out.go:177] * Starting "false-286000" primary control-plane node in "false-286000" cluster
	I1030 11:40:54.147153   14777 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:40:54.147164   14777 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:40:54.147172   14777 cache.go:56] Caching tarball of preloaded images
	I1030 11:40:54.147238   14777 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:40:54.147243   14777 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:40:54.147295   14777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/false-286000/config.json ...
	I1030 11:40:54.147305   14777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/false-286000/config.json: {Name:mkbf82354decad147b916122c57e39b5c2a0d08d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:40:54.147590   14777 start.go:360] acquireMachinesLock for false-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:40:54.147633   14777 start.go:364] duration metric: took 37.334µs to acquireMachinesLock for "false-286000"
	I1030 11:40:54.147660   14777 start.go:93] Provisioning new machine with config: &{Name:false-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:40:54.147682   14777 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:40:54.152185   14777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:40:54.166573   14777 start.go:159] libmachine.API.Create for "false-286000" (driver="qemu2")
	I1030 11:40:54.166593   14777 client.go:168] LocalClient.Create starting
	I1030 11:40:54.166665   14777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:40:54.166703   14777 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:54.166714   14777 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:54.166747   14777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:40:54.166776   14777 main.go:141] libmachine: Decoding PEM data...
	I1030 11:40:54.166783   14777 main.go:141] libmachine: Parsing certificate...
	I1030 11:40:54.167130   14777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:40:54.333366   14777 main.go:141] libmachine: Creating SSH key...
	I1030 11:40:54.402274   14777 main.go:141] libmachine: Creating Disk image...
	I1030 11:40:54.402282   14777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:40:54.402459   14777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2
	I1030 11:40:54.412438   14777 main.go:141] libmachine: STDOUT: 
	I1030 11:40:54.412460   14777 main.go:141] libmachine: STDERR: 
	I1030 11:40:54.412526   14777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2 +20000M
	I1030 11:40:54.421118   14777 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:40:54.421134   14777 main.go:141] libmachine: STDERR: 
	I1030 11:40:54.421148   14777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2
	I1030 11:40:54.421153   14777 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:40:54.421167   14777 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:40:54.421195   14777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:8c:f3:5d:49:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2
	I1030 11:40:54.423052   14777 main.go:141] libmachine: STDOUT: 
	I1030 11:40:54.423073   14777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:40:54.423096   14777 client.go:171] duration metric: took 256.500667ms to LocalClient.Create
	I1030 11:40:56.425279   14777 start.go:128] duration metric: took 2.277597625s to createHost
	I1030 11:40:56.425348   14777 start.go:83] releasing machines lock for "false-286000", held for 2.277735s
	W1030 11:40:56.425440   14777 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:56.439569   14777 out.go:177] * Deleting "false-286000" in qemu2 ...
	W1030 11:40:56.466966   14777 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:40:56.466997   14777 start.go:729] Will try again in 5 seconds ...
	I1030 11:41:01.469030   14777 start.go:360] acquireMachinesLock for false-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:01.469174   14777 start.go:364] duration metric: took 126.583µs to acquireMachinesLock for "false-286000"
	I1030 11:41:01.469191   14777 start.go:93] Provisioning new machine with config: &{Name:false-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:01.469254   14777 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:01.478394   14777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:01.493350   14777 start.go:159] libmachine.API.Create for "false-286000" (driver="qemu2")
	I1030 11:41:01.493377   14777 client.go:168] LocalClient.Create starting
	I1030 11:41:01.493451   14777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:01.493504   14777 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:01.493514   14777 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:01.493538   14777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:01.493566   14777 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:01.493572   14777 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:01.493965   14777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:01.658627   14777 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:01.748692   14777 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:01.748704   14777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:01.748890   14777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2
	I1030 11:41:01.759741   14777 main.go:141] libmachine: STDOUT: 
	I1030 11:41:01.759768   14777 main.go:141] libmachine: STDERR: 
	I1030 11:41:01.759846   14777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2 +20000M
	I1030 11:41:01.769084   14777 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:01.769106   14777 main.go:141] libmachine: STDERR: 
	I1030 11:41:01.769117   14777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2
	I1030 11:41:01.769132   14777 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:01.769142   14777 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:01.769171   14777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a4:5d:89:53:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/false-286000/disk.qcow2
	I1030 11:41:01.771049   14777 main.go:141] libmachine: STDOUT: 
	I1030 11:41:01.771070   14777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:01.771083   14777 client.go:171] duration metric: took 277.705416ms to LocalClient.Create
	I1030 11:41:03.773346   14777 start.go:128] duration metric: took 2.304090292s to createHost
	I1030 11:41:03.773419   14777 start.go:83] releasing machines lock for "false-286000", held for 2.304261417s
	W1030 11:41:03.773730   14777 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:03.785282   14777 out.go:201] 
	W1030 11:41:03.790283   14777 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:41:03.790314   14777 out.go:270] * 
	* 
	W1030 11:41:03.793161   14777 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:41:03.800266   14777 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
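Before each launch the driver prepares the guest disk the same way: qemu-img convert turns the raw image into qcow2, then qemu-img resize grows it by +20000M, and only then is qemu-system-aarch64 started through socket_vmnet_client. Both qemu-img steps succeed in every run above; only the socket connection fails. A Go sketch of those two shell-outs, with placeholder file names (the real ones live under .minikube/machines/<profile>/):

    // diskprep.go - the two qemu-img invocations the driver logs as
    // "executing: qemu-img convert ..." and "executing: qemu-img resize ...".
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // run executes a command and echoes its combined output, roughly the
    // way libmachine logs STDOUT/STDERR for each step.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("executing: %s %v\nOUTPUT: %s\n", name, args, out)
        if err != nil {
            log.Fatalf("%s failed: %v", name, err)
        }
    }

    func main() {
        raw := "disk.qcow2.raw" // placeholder for the machine's raw image
        qcow := "disk.qcow2"    // placeholder for the converted qcow2 image
        run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow)
        run("qemu-img", "resize", qcow, "+20000M")
    }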

TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.787894875s)

-- stdout --
	* [enable-default-cni-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-286000" primary control-plane node in "enable-default-cni-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:41:06.168185   14888 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:41:06.168331   14888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:06.168334   14888 out.go:358] Setting ErrFile to fd 2...
	I1030 11:41:06.168337   14888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:06.168478   14888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:41:06.169683   14888 out.go:352] Setting JSON to false
	I1030 11:41:06.187765   14888 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7837,"bootTime":1730305829,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:41:06.187848   14888 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:41:06.194494   14888 out.go:177] * [enable-default-cni-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:41:06.201524   14888 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:41:06.201585   14888 notify.go:220] Checking for updates...
	I1030 11:41:06.207491   14888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:41:06.210538   14888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:41:06.213520   14888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:41:06.216518   14888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:41:06.219521   14888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:41:06.222732   14888 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:41:06.222809   14888 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:41:06.222870   14888 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:41:06.227471   14888 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:41:06.234467   14888 start.go:297] selected driver: qemu2
	I1030 11:41:06.234473   14888 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:41:06.234480   14888 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:41:06.236845   14888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:41:06.241460   14888 out.go:177] * Automatically selected the socket_vmnet network
	E1030 11:41:06.244605   14888 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1030 11:41:06.244620   14888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:41:06.244641   14888 cni.go:84] Creating CNI manager for "bridge"
	I1030 11:41:06.244659   14888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:41:06.244687   14888 start.go:340] cluster config:
	{Name:enable-default-cni-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:41:06.249119   14888 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:41:06.257514   14888 out.go:177] * Starting "enable-default-cni-286000" primary control-plane node in "enable-default-cni-286000" cluster
	I1030 11:41:06.261385   14888 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:41:06.261399   14888 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:41:06.261414   14888 cache.go:56] Caching tarball of preloaded images
	I1030 11:41:06.261486   14888 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:41:06.261491   14888 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:41:06.261552   14888 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/enable-default-cni-286000/config.json ...
	I1030 11:41:06.261569   14888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/enable-default-cni-286000/config.json: {Name:mk90acc5e0a005bf7c21c47ddffce1bde389f009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:41:06.261812   14888 start.go:360] acquireMachinesLock for enable-default-cni-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:06.261857   14888 start.go:364] duration metric: took 38.75µs to acquireMachinesLock for "enable-default-cni-286000"
	I1030 11:41:06.261868   14888 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:06.261894   14888 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:06.270470   14888 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:06.286274   14888 start.go:159] libmachine.API.Create for "enable-default-cni-286000" (driver="qemu2")
	I1030 11:41:06.286302   14888 client.go:168] LocalClient.Create starting
	I1030 11:41:06.286372   14888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:06.286409   14888 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:06.286418   14888 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:06.286452   14888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:06.286485   14888 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:06.286491   14888 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:06.286916   14888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:06.449588   14888 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:06.479938   14888 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:06.479945   14888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:06.480165   14888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2
	I1030 11:41:06.490310   14888 main.go:141] libmachine: STDOUT: 
	I1030 11:41:06.490333   14888 main.go:141] libmachine: STDERR: 
	I1030 11:41:06.490416   14888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2 +20000M
	I1030 11:41:06.499376   14888 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:06.499393   14888 main.go:141] libmachine: STDERR: 
	I1030 11:41:06.499410   14888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2
	I1030 11:41:06.499416   14888 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:06.499429   14888 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:06.499459   14888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0b:d0:c1:3e:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2
	I1030 11:41:06.501336   14888 main.go:141] libmachine: STDOUT: 
	I1030 11:41:06.501350   14888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:06.501368   14888 client.go:171] duration metric: took 215.062125ms to LocalClient.Create
	I1030 11:41:08.502186   14888 start.go:128] duration metric: took 2.240309417s to createHost
	I1030 11:41:08.502234   14888 start.go:83] releasing machines lock for "enable-default-cni-286000", held for 2.240398458s
	W1030 11:41:08.502249   14888 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:08.506408   14888 out.go:177] * Deleting "enable-default-cni-286000" in qemu2 ...
	W1030 11:41:08.524089   14888 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:08.524099   14888 start.go:729] Will try again in 5 seconds ...
	I1030 11:41:13.526250   14888 start.go:360] acquireMachinesLock for enable-default-cni-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:13.526598   14888 start.go:364] duration metric: took 275.25µs to acquireMachinesLock for "enable-default-cni-286000"
	I1030 11:41:13.526648   14888 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:13.526776   14888 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:13.544234   14888 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:13.576758   14888 start.go:159] libmachine.API.Create for "enable-default-cni-286000" (driver="qemu2")
	I1030 11:41:13.576805   14888 client.go:168] LocalClient.Create starting
	I1030 11:41:13.576928   14888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:13.577003   14888 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:13.577017   14888 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:13.577067   14888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:13.577121   14888 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:13.577132   14888 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:13.577738   14888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:13.748582   14888 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:13.856097   14888 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:13.856108   14888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:13.856301   14888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2
	I1030 11:41:13.866534   14888 main.go:141] libmachine: STDOUT: 
	I1030 11:41:13.866566   14888 main.go:141] libmachine: STDERR: 
	I1030 11:41:13.866634   14888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2 +20000M
	I1030 11:41:13.875481   14888 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:13.875502   14888 main.go:141] libmachine: STDERR: 
	I1030 11:41:13.875521   14888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2
	I1030 11:41:13.875527   14888 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:13.875538   14888 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:13.875583   14888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ca:a5:de:77:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/enable-default-cni-286000/disk.qcow2
	I1030 11:41:13.877881   14888 main.go:141] libmachine: STDOUT: 
	I1030 11:41:13.877910   14888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:13.877931   14888 client.go:171] duration metric: took 301.123ms to LocalClient.Create
	I1030 11:41:15.880111   14888 start.go:128] duration metric: took 2.353328541s to createHost
	I1030 11:41:15.880211   14888 start.go:83] releasing machines lock for "enable-default-cni-286000", held for 2.35361825s
	W1030 11:41:15.880625   14888 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:15.894396   14888 out.go:201] 
	W1030 11:41:15.897518   14888 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:41:15.897545   14888 out.go:270] * 
	* 
	W1030 11:41:15.900338   14888 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:41:15.911400   14888 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
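One detail specific to this test: the E1030 line in the stderr shows minikube rewriting the deprecated flag on the fly ("Found deprecated --enable-default-cni flag, setting --cni=bridge"), so this run is effectively a bridge-CNI start. A sketch of the equivalent invocation without the deprecated flag, using the same profile and driver as the command above (illustrative only, not run by the suite):

	# Same effect as --enable-default-cni=true, per the rewrite logged above.
	out/minikube-darwin-arm64 start -p enable-default-cni-286000 --memory=3072 --cni=bridge --driver=qemu2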

TestNetworkPlugins/group/flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.88207875s)

-- stdout --
	* [flannel-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-286000" primary control-plane node in "flannel-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:41:18.333688   14997 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:41:18.333834   14997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:18.333837   14997 out.go:358] Setting ErrFile to fd 2...
	I1030 11:41:18.333839   14997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:18.333967   14997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:41:18.335209   14997 out.go:352] Setting JSON to false
	I1030 11:41:18.353818   14997 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7849,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:41:18.353893   14997 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:41:18.358387   14997 out.go:177] * [flannel-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:41:18.365250   14997 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:41:18.365286   14997 notify.go:220] Checking for updates...
	I1030 11:41:18.372243   14997 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:41:18.375292   14997 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:41:18.378255   14997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:41:18.381278   14997 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:41:18.384251   14997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:41:18.387634   14997 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:41:18.387700   14997 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:41:18.387743   14997 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:41:18.392243   14997 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:41:18.399260   14997 start.go:297] selected driver: qemu2
	I1030 11:41:18.399264   14997 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:41:18.399269   14997 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:41:18.401764   14997 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:41:18.406180   14997 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:41:18.409371   14997 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:41:18.409392   14997 cni.go:84] Creating CNI manager for "flannel"
	I1030 11:41:18.409402   14997 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1030 11:41:18.409429   14997 start.go:340] cluster config:
	{Name:flannel-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:41:18.413876   14997 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:41:18.422249   14997 out.go:177] * Starting "flannel-286000" primary control-plane node in "flannel-286000" cluster
	I1030 11:41:18.426275   14997 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:41:18.426287   14997 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:41:18.426295   14997 cache.go:56] Caching tarball of preloaded images
	I1030 11:41:18.426356   14997 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:41:18.426361   14997 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:41:18.426404   14997 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/flannel-286000/config.json ...
	I1030 11:41:18.426414   14997 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/flannel-286000/config.json: {Name:mkcbc70e5d468bd6d7d06a6a7ad0fd29278f27bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:41:18.426768   14997 start.go:360] acquireMachinesLock for flannel-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:18.426813   14997 start.go:364] duration metric: took 39.541µs to acquireMachinesLock for "flannel-286000"
	I1030 11:41:18.426824   14997 start.go:93] Provisioning new machine with config: &{Name:flannel-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:18.426856   14997 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:18.430239   14997 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:18.445047   14997 start.go:159] libmachine.API.Create for "flannel-286000" (driver="qemu2")
	I1030 11:41:18.445072   14997 client.go:168] LocalClient.Create starting
	I1030 11:41:18.445142   14997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:18.445183   14997 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:18.445194   14997 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:18.445228   14997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:18.445256   14997 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:18.445266   14997 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:18.445627   14997 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:18.609102   14997 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:18.737594   14997 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:18.737605   14997 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:18.737804   14997 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2
	I1030 11:41:18.747711   14997 main.go:141] libmachine: STDOUT: 
	I1030 11:41:18.747729   14997 main.go:141] libmachine: STDERR: 
	I1030 11:41:18.747784   14997 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2 +20000M
	I1030 11:41:18.756516   14997 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:18.756529   14997 main.go:141] libmachine: STDERR: 
	I1030 11:41:18.756544   14997 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2
	I1030 11:41:18.756550   14997 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:18.756562   14997 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:18.756596   14997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4a:88:a1:a7:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2
	I1030 11:41:18.758524   14997 main.go:141] libmachine: STDOUT: 
	I1030 11:41:18.758542   14997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:18.758560   14997 client.go:171] duration metric: took 313.48675ms to LocalClient.Create
	I1030 11:41:20.760963   14997 start.go:128] duration metric: took 2.334001542s to createHost
	I1030 11:41:20.761069   14997 start.go:83] releasing machines lock for "flannel-286000", held for 2.33427425s
	W1030 11:41:20.761126   14997 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:20.776394   14997 out.go:177] * Deleting "flannel-286000" in qemu2 ...
	W1030 11:41:20.801930   14997 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:20.801972   14997 start.go:729] Will try again in 5 seconds ...
	I1030 11:41:25.804093   14997 start.go:360] acquireMachinesLock for flannel-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:25.804424   14997 start.go:364] duration metric: took 286.625µs to acquireMachinesLock for "flannel-286000"
	I1030 11:41:25.804461   14997 start.go:93] Provisioning new machine with config: &{Name:flannel-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:25.804547   14997 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:25.814063   14997 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:25.845117   14997 start.go:159] libmachine.API.Create for "flannel-286000" (driver="qemu2")
	I1030 11:41:25.845157   14997 client.go:168] LocalClient.Create starting
	I1030 11:41:25.845274   14997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:25.845342   14997 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:25.845380   14997 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:25.845436   14997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:25.845484   14997 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:25.845500   14997 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:25.846045   14997 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:26.016657   14997 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:26.114086   14997 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:26.114096   14997 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:26.114320   14997 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2
	I1030 11:41:26.124584   14997 main.go:141] libmachine: STDOUT: 
	I1030 11:41:26.124605   14997 main.go:141] libmachine: STDERR: 
	I1030 11:41:26.124688   14997 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2 +20000M
	I1030 11:41:26.133399   14997 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:26.133430   14997 main.go:141] libmachine: STDERR: 
	I1030 11:41:26.133439   14997 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2
	I1030 11:41:26.133445   14997 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:26.133454   14997 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:26.133487   14997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:0e:fe:a8:88:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/flannel-286000/disk.qcow2
	I1030 11:41:26.135363   14997 main.go:141] libmachine: STDOUT: 
	I1030 11:41:26.135377   14997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:26.135388   14997 client.go:171] duration metric: took 290.229042ms to LocalClient.Create
	I1030 11:41:28.137585   14997 start.go:128] duration metric: took 2.333019334s to createHost
	I1030 11:41:28.137683   14997 start.go:83] releasing machines lock for "flannel-286000", held for 2.333270583s
	W1030 11:41:28.138033   14997 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:28.148694   14997 out.go:201] 
	W1030 11:41:28.155798   14997 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:41:28.155824   14997 out.go:270] * 
	* 
	W1030 11:41:28.158274   14997 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:41:28.167654   14997 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.88s)
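Because the same "Connection refused" kills every start in this group within seconds, the actionable fix is on the host rather than in the tests: bring the socket_vmnet daemon back up before re-running. A hedged recovery sketch; the exact service name and flags depend on how socket_vmnet was installed on this agent (the /opt/socket_vmnet paths in the log match a standard install, while the Homebrew service and the gateway address below are assumptions):

	# If socket_vmnet was installed as a Homebrew service (assumption):
	sudo brew services restart socket_vmnet
	# Or relaunch the daemon directly; the gateway IP is an example value,
	# not taken from this log:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet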

TestNetworkPlugins/group/bridge/Start (9.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.744484209s)

-- stdout --
	* [bridge-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-286000" primary control-plane node in "bridge-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:41:30.775855   15116 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:41:30.776013   15116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:30.776016   15116 out.go:358] Setting ErrFile to fd 2...
	I1030 11:41:30.776019   15116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:30.776152   15116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:41:30.777309   15116 out.go:352] Setting JSON to false
	I1030 11:41:30.796222   15116 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7861,"bootTime":1730305829,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:41:30.796311   15116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:41:30.801125   15116 out.go:177] * [bridge-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:41:30.809412   15116 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:41:30.809468   15116 notify.go:220] Checking for updates...
	I1030 11:41:30.818237   15116 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:41:30.821293   15116 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:41:30.825268   15116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:41:30.828287   15116 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:41:30.831274   15116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:41:30.834670   15116 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:41:30.834737   15116 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:41:30.834783   15116 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:41:30.839259   15116 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:41:30.846235   15116 start.go:297] selected driver: qemu2
	I1030 11:41:30.846240   15116 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:41:30.846246   15116 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:41:30.848819   15116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:41:30.852222   15116 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:41:30.855312   15116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:41:30.855328   15116 cni.go:84] Creating CNI manager for "bridge"
	I1030 11:41:30.855331   15116 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:41:30.855362   15116 start.go:340] cluster config:
	{Name:bridge-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:41:30.859923   15116 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:41:30.868239   15116 out.go:177] * Starting "bridge-286000" primary control-plane node in "bridge-286000" cluster
	I1030 11:41:30.871281   15116 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:41:30.871297   15116 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:41:30.871309   15116 cache.go:56] Caching tarball of preloaded images
	I1030 11:41:30.871396   15116 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:41:30.871401   15116 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:41:30.871469   15116 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/bridge-286000/config.json ...
	I1030 11:41:30.871480   15116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/bridge-286000/config.json: {Name:mk3709b5c16f83978fc5a76416292ba88fd5f9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:41:30.871841   15116 start.go:360] acquireMachinesLock for bridge-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:30.871896   15116 start.go:364] duration metric: took 50.209µs to acquireMachinesLock for "bridge-286000"
	I1030 11:41:30.871908   15116 start.go:93] Provisioning new machine with config: &{Name:bridge-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:30.871940   15116 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:30.879155   15116 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:30.894826   15116 start.go:159] libmachine.API.Create for "bridge-286000" (driver="qemu2")
	I1030 11:41:30.894855   15116 client.go:168] LocalClient.Create starting
	I1030 11:41:30.894940   15116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:30.894980   15116 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:30.894991   15116 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:30.895031   15116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:30.895060   15116 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:30.895070   15116 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:30.895524   15116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:31.059628   15116 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:31.098440   15116 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:31.098446   15116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:31.098619   15116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2
	I1030 11:41:31.108935   15116 main.go:141] libmachine: STDOUT: 
	I1030 11:41:31.108965   15116 main.go:141] libmachine: STDERR: 
	I1030 11:41:31.109029   15116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2 +20000M
	I1030 11:41:31.117604   15116 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:31.117619   15116 main.go:141] libmachine: STDERR: 
	I1030 11:41:31.117644   15116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2
	I1030 11:41:31.117649   15116 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:31.117661   15116 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:31.117697   15116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:78:34:a0:a7:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2
	I1030 11:41:31.119548   15116 main.go:141] libmachine: STDOUT: 
	I1030 11:41:31.119562   15116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:31.119590   15116 client.go:171] duration metric: took 224.732209ms to LocalClient.Create
	I1030 11:41:33.121672   15116 start.go:128] duration metric: took 2.249745542s to createHost
	I1030 11:41:33.121716   15116 start.go:83] releasing machines lock for "bridge-286000", held for 2.24984125s
	W1030 11:41:33.121755   15116 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:33.132751   15116 out.go:177] * Deleting "bridge-286000" in qemu2 ...
	W1030 11:41:33.150946   15116 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:33.150955   15116 start.go:729] Will try again in 5 seconds ...
	I1030 11:41:38.153184   15116 start.go:360] acquireMachinesLock for bridge-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:38.153818   15116 start.go:364] duration metric: took 522.875µs to acquireMachinesLock for "bridge-286000"
	I1030 11:41:38.153981   15116 start.go:93] Provisioning new machine with config: &{Name:bridge-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:38.154232   15116 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:38.164881   15116 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:38.213212   15116 start.go:159] libmachine.API.Create for "bridge-286000" (driver="qemu2")
	I1030 11:41:38.213269   15116 client.go:168] LocalClient.Create starting
	I1030 11:41:38.213408   15116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:38.213553   15116 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:38.213572   15116 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:38.213637   15116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:38.213698   15116 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:38.213709   15116 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:38.214394   15116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:38.389047   15116 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:38.431825   15116 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:38.431831   15116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:38.432015   15116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2
	I1030 11:41:38.442230   15116 main.go:141] libmachine: STDOUT: 
	I1030 11:41:38.442253   15116 main.go:141] libmachine: STDERR: 
	I1030 11:41:38.442332   15116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2 +20000M
	I1030 11:41:38.450925   15116 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:38.450942   15116 main.go:141] libmachine: STDERR: 
	I1030 11:41:38.450955   15116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2
	I1030 11:41:38.450960   15116 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:38.450971   15116 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:38.451009   15116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:d2:cf:63:51:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/bridge-286000/disk.qcow2
	I1030 11:41:38.452878   15116 main.go:141] libmachine: STDOUT: 
	I1030 11:41:38.452899   15116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:38.452911   15116 client.go:171] duration metric: took 239.638459ms to LocalClient.Create
	I1030 11:41:40.454984   15116 start.go:128] duration metric: took 2.300755916s to createHost
	I1030 11:41:40.455047   15116 start.go:83] releasing machines lock for "bridge-286000", held for 2.301236125s
	W1030 11:41:40.455210   15116 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:40.468526   15116 out.go:201] 
	W1030 11:41:40.471558   15116 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:41:40.471567   15116 out.go:270] * 
	* 
	W1030 11:41:40.472358   15116 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:41:40.477500   15116 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.75s)
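Every start attempt above fails at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), and host creation is aborted, retried once after 5 seconds, then given up with GUEST_PROVISION and exit status 80. A minimal sketch for checking the daemon state on the agent, assuming a Homebrew-managed socket_vmnet install (the service name is an assumption, not taken from this log):

	# Does the daemon's unix socket exist at the path minikube is using?
	ls -l /var/run/socket_vmnet
	# Reproduce the client-side error directly; `true` is a stand-in for the qemu command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If the daemon is down, restart it (service name assumes the Homebrew formula)
	sudo brew services restart socket_vmnet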

TestNetworkPlugins/group/kubenet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-286000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.812453458s)

-- stdout --
	* [kubenet-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-286000" primary control-plane node in "kubenet-286000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-286000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:41:42.837697   15225 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:41:42.837851   15225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:42.837854   15225 out.go:358] Setting ErrFile to fd 2...
	I1030 11:41:42.837857   15225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:42.837976   15225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:41:42.839160   15225 out.go:352] Setting JSON to false
	I1030 11:41:42.857256   15225 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7873,"bootTime":1730305829,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:41:42.857360   15225 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:41:42.863953   15225 out.go:177] * [kubenet-286000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:41:42.870971   15225 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:41:42.870997   15225 notify.go:220] Checking for updates...
	I1030 11:41:42.880900   15225 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:41:42.883894   15225 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:41:42.887889   15225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:41:42.890880   15225 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:41:42.893832   15225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:41:42.897174   15225 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:41:42.897244   15225 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:41:42.897290   15225 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:41:42.901879   15225 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:41:42.908894   15225 start.go:297] selected driver: qemu2
	I1030 11:41:42.908900   15225 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:41:42.908905   15225 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:41:42.911259   15225 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:41:42.914863   15225 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:41:42.917971   15225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:41:42.917987   15225 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1030 11:41:42.918010   15225 start.go:340] cluster config:
	{Name:kubenet-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:41:42.922359   15225 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:41:42.930909   15225 out.go:177] * Starting "kubenet-286000" primary control-plane node in "kubenet-286000" cluster
	I1030 11:41:42.934880   15225 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:41:42.934896   15225 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:41:42.934907   15225 cache.go:56] Caching tarball of preloaded images
	I1030 11:41:42.934993   15225 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:41:42.934999   15225 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:41:42.935064   15225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/kubenet-286000/config.json ...
	I1030 11:41:42.935074   15225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/kubenet-286000/config.json: {Name:mk1d8bfff7675d888fad6852f816646605aae5c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:41:42.935421   15225 start.go:360] acquireMachinesLock for kubenet-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:42.935466   15225 start.go:364] duration metric: took 39.291µs to acquireMachinesLock for "kubenet-286000"
	I1030 11:41:42.935477   15225 start.go:93] Provisioning new machine with config: &{Name:kubenet-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:42.935511   15225 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:42.938893   15225 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:42.954562   15225 start.go:159] libmachine.API.Create for "kubenet-286000" (driver="qemu2")
	I1030 11:41:42.954599   15225 client.go:168] LocalClient.Create starting
	I1030 11:41:42.954675   15225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:42.954715   15225 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:42.954728   15225 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:42.954769   15225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:42.954797   15225 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:42.954805   15225 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:42.955271   15225 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:43.120664   15225 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:43.191293   15225 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:43.191300   15225 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:43.191485   15225 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2
	I1030 11:41:43.201323   15225 main.go:141] libmachine: STDOUT: 
	I1030 11:41:43.201336   15225 main.go:141] libmachine: STDERR: 
	I1030 11:41:43.201401   15225 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2 +20000M
	I1030 11:41:43.209738   15225 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:43.209754   15225 main.go:141] libmachine: STDERR: 
	I1030 11:41:43.209769   15225 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2
	I1030 11:41:43.209776   15225 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:43.209789   15225 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:43.209822   15225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:d5:59:4b:e7:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2
	I1030 11:41:43.211685   15225 main.go:141] libmachine: STDOUT: 
	I1030 11:41:43.211697   15225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:43.211714   15225 client.go:171] duration metric: took 257.113291ms to LocalClient.Create
	I1030 11:41:45.213885   15225 start.go:128] duration metric: took 2.278372666s to createHost
	I1030 11:41:45.213974   15225 start.go:83] releasing machines lock for "kubenet-286000", held for 2.278526917s
	W1030 11:41:45.214029   15225 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:45.228990   15225 out.go:177] * Deleting "kubenet-286000" in qemu2 ...
	W1030 11:41:45.255415   15225 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:45.255444   15225 start.go:729] Will try again in 5 seconds ...
	I1030 11:41:50.257645   15225 start.go:360] acquireMachinesLock for kubenet-286000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:50.258148   15225 start.go:364] duration metric: took 400.875µs to acquireMachinesLock for "kubenet-286000"
	I1030 11:41:50.258217   15225 start.go:93] Provisioning new machine with config: &{Name:kubenet-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:50.258430   15225 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:50.269082   15225 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1030 11:41:50.310210   15225 start.go:159] libmachine.API.Create for "kubenet-286000" (driver="qemu2")
	I1030 11:41:50.310261   15225 client.go:168] LocalClient.Create starting
	I1030 11:41:50.310386   15225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:50.310475   15225 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:50.310495   15225 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:50.310566   15225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:50.310616   15225 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:50.310629   15225 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:50.311135   15225 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:50.485217   15225 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:50.562379   15225 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:50.562389   15225 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:50.562589   15225 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2
	I1030 11:41:50.572481   15225 main.go:141] libmachine: STDOUT: 
	I1030 11:41:50.572504   15225 main.go:141] libmachine: STDERR: 
	I1030 11:41:50.572558   15225 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2 +20000M
	I1030 11:41:50.581088   15225 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:50.581104   15225 main.go:141] libmachine: STDERR: 
	I1030 11:41:50.581114   15225 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2
	I1030 11:41:50.581120   15225 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:50.581129   15225 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:50.581155   15225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:74:a9:39:4c:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/kubenet-286000/disk.qcow2
	I1030 11:41:50.582929   15225 main.go:141] libmachine: STDOUT: 
	I1030 11:41:50.582944   15225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:50.582957   15225 client.go:171] duration metric: took 272.692625ms to LocalClient.Create
	I1030 11:41:52.585086   15225 start.go:128] duration metric: took 2.326641583s to createHost
	I1030 11:41:52.585130   15225 start.go:83] releasing machines lock for "kubenet-286000", held for 2.32699175s
	W1030 11:41:52.585338   15225 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-286000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:52.594822   15225 out.go:201] 
	W1030 11:41:52.598805   15225 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:41:52.598819   15225 out.go:270] * 
	* 
	W1030 11:41:52.600038   15225 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:41:52.607766   15225 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.81s)
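The kubenet run fails identically to the bridge run, which points to an environmental problem rather than anything plugin-specific: each network-plugin test spends roughly 10 seconds creating and deleting a VM that can never attach to the network. A hypothetical pre-flight gate (a sketch only, reusing the client binary and socket path shown in the logs above) could skip the whole qemu2 group up front when the daemon is unreachable:

	# Gate the qemu2 suite on socket_vmnet availability; socket_vmnet_client
	# exits non-zero ("exit status 1" in the logs above) when it cannot connect.
	if ! /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true 2>/dev/null; then
	    echo "socket_vmnet daemon unreachable; skipping qemu2 network tests" >&2
	    exit 1
	fi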

TestStartStop/group/old-k8s-version/serial/FirstStart (10.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-239000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-239000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.156836708s)

-- stdout --
	* [old-k8s-version-239000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-239000" primary control-plane node in "old-k8s-version-239000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-239000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:41:54.975849   15342 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:41:54.976008   15342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:54.976012   15342 out.go:358] Setting ErrFile to fd 2...
	I1030 11:41:54.976014   15342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:41:54.976144   15342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:41:54.977347   15342 out.go:352] Setting JSON to false
	I1030 11:41:54.995523   15342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7885,"bootTime":1730305829,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:41:54.995593   15342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:41:55.000282   15342 out.go:177] * [old-k8s-version-239000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:41:55.008250   15342 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:41:55.008304   15342 notify.go:220] Checking for updates...
	I1030 11:41:55.016168   15342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:41:55.019257   15342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:41:55.020719   15342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:41:55.024190   15342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:41:55.027217   15342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:41:55.030662   15342 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:41:55.030735   15342 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:41:55.030774   15342 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:41:55.035170   15342 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:41:55.042278   15342 start.go:297] selected driver: qemu2
	I1030 11:41:55.042285   15342 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:41:55.042293   15342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:41:55.044732   15342 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:41:55.049185   15342 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:41:55.052239   15342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:41:55.052256   15342 cni.go:84] Creating CNI manager for ""
	I1030 11:41:55.052275   15342 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1030 11:41:55.052308   15342 start.go:340] cluster config:
	{Name:old-k8s-version-239000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:41:55.056565   15342 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:41:55.065210   15342 out.go:177] * Starting "old-k8s-version-239000" primary control-plane node in "old-k8s-version-239000" cluster
	I1030 11:41:55.069206   15342 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:41:55.069221   15342 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1030 11:41:55.069228   15342 cache.go:56] Caching tarball of preloaded images
	I1030 11:41:55.069299   15342 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:41:55.069305   15342 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1030 11:41:55.069368   15342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/old-k8s-version-239000/config.json ...
	I1030 11:41:55.069378   15342 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/old-k8s-version-239000/config.json: {Name:mk69ab5a7e5cc7063d77c4356d8f9523f6e0e7d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:41:55.069610   15342 start.go:360] acquireMachinesLock for old-k8s-version-239000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:41:55.069654   15342 start.go:364] duration metric: took 36.458µs to acquireMachinesLock for "old-k8s-version-239000"
	I1030 11:41:55.069665   15342 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:41:55.069691   15342 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:41:55.078195   15342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:41:55.093326   15342 start.go:159] libmachine.API.Create for "old-k8s-version-239000" (driver="qemu2")
	I1030 11:41:55.093355   15342 client.go:168] LocalClient.Create starting
	I1030 11:41:55.093425   15342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:41:55.093470   15342 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:55.093480   15342 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:55.093516   15342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:41:55.093546   15342 main.go:141] libmachine: Decoding PEM data...
	I1030 11:41:55.093553   15342 main.go:141] libmachine: Parsing certificate...
	I1030 11:41:55.093918   15342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:41:55.257432   15342 main.go:141] libmachine: Creating SSH key...
	I1030 11:41:55.446890   15342 main.go:141] libmachine: Creating Disk image...
	I1030 11:41:55.446902   15342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:41:55.447141   15342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:41:55.457498   15342 main.go:141] libmachine: STDOUT: 
	I1030 11:41:55.457522   15342 main.go:141] libmachine: STDERR: 
	I1030 11:41:55.457588   15342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2 +20000M
	I1030 11:41:55.466530   15342 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:41:55.466561   15342 main.go:141] libmachine: STDERR: 
	I1030 11:41:55.466575   15342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:41:55.466579   15342 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:41:55.466594   15342 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:41:55.466622   15342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f5:71:4a:b9:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:41:55.468603   15342 main.go:141] libmachine: STDOUT: 
	I1030 11:41:55.468617   15342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:41:55.468639   15342 client.go:171] duration metric: took 375.279959ms to LocalClient.Create
	I1030 11:41:57.470841   15342 start.go:128] duration metric: took 2.4011455s to createHost
	I1030 11:41:57.470931   15342 start.go:83] releasing machines lock for "old-k8s-version-239000", held for 2.401295917s
	W1030 11:41:57.470987   15342 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:57.487360   15342 out.go:177] * Deleting "old-k8s-version-239000" in qemu2 ...
	W1030 11:41:57.514194   15342 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:41:57.514226   15342 start.go:729] Will try again in 5 seconds ...
	I1030 11:42:02.516330   15342 start.go:360] acquireMachinesLock for old-k8s-version-239000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:02.516838   15342 start.go:364] duration metric: took 430.625µs to acquireMachinesLock for "old-k8s-version-239000"
	I1030 11:42:02.516958   15342 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:42:02.517192   15342 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:42:02.525759   15342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:42:02.563918   15342 start.go:159] libmachine.API.Create for "old-k8s-version-239000" (driver="qemu2")
	I1030 11:42:02.563982   15342 client.go:168] LocalClient.Create starting
	I1030 11:42:02.564175   15342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:42:02.564270   15342 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:02.564289   15342 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:02.564370   15342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:42:02.564421   15342 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:02.564432   15342 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:02.565111   15342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:42:02.823864   15342 main.go:141] libmachine: Creating SSH key...
	I1030 11:42:03.035649   15342 main.go:141] libmachine: Creating Disk image...
	I1030 11:42:03.035663   15342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:42:03.035885   15342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:42:03.046002   15342 main.go:141] libmachine: STDOUT: 
	I1030 11:42:03.046019   15342 main.go:141] libmachine: STDERR: 
	I1030 11:42:03.046074   15342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2 +20000M
	I1030 11:42:03.054548   15342 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:42:03.054573   15342 main.go:141] libmachine: STDERR: 
	I1030 11:42:03.054596   15342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:42:03.054604   15342 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:42:03.054612   15342 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:03.054655   15342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:3f:bf:0b:1e:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:42:03.056597   15342 main.go:141] libmachine: STDOUT: 
	I1030 11:42:03.056610   15342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:03.056623   15342 client.go:171] duration metric: took 492.623ms to LocalClient.Create
	I1030 11:42:05.058715   15342 start.go:128] duration metric: took 2.541527375s to createHost
	I1030 11:42:05.058748   15342 start.go:83] releasing machines lock for "old-k8s-version-239000", held for 2.541926875s
	W1030 11:42:05.058936   15342 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:05.070200   15342 out.go:201] 
	W1030 11:42:05.075082   15342 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:05.075090   15342 out.go:270] * 
	* 
	W1030 11:42:05.075859   15342 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:42:05.090175   15342 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-239000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (41.949667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.20s)
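
Triage note: both createHost attempts above fail at the same step. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon's unix socket, so the VM never boots. A minimal Go sketch (not minikube code; the socket path is copied from the log) that verifies this precondition on the agent:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing socket_vmnet_client invocation above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A daemon that is not running yields "connection refused" here,
		// matching the STDERR captured on every start attempt in this group.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Until that daemon is reachable, every qemu2 test on this agent that uses the socket_vmnet network will likely fail the same way.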

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-239000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-239000 create -f testdata/busybox.yaml: exit status 1 (27.8025ms)

** stderr ** 
	error: context "old-k8s-version-239000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-239000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (34.234375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (34.15325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
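
The error here is a downstream symptom rather than a new failure: FirstStart never provisioned a host, so minikube never wrote an "old-k8s-version-239000" context into the kubeconfig that kubectl is pointed at. A hedged client-go sketch (the kubeconfig path is the one printed in this run's environment; this program is not part of the test suite) that lists the contexts which actually exist:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG as reported in the run's stdout.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19883-11536/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	for name := range cfg.Contexts {
		fmt.Println(name) // "old-k8s-version-239000" is absent after the failed start
	}
}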

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-239000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-239000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-239000 describe deploy/metrics-server -n kube-system: exit status 1 (28.78125ms)

** stderr ** 
	error: context "old-k8s-version-239000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-239000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (34.151917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
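
For reference, the expected string in the assertion above is derived from the test's own flags: the custom registry from --registries=MetricsServer=fake.domain is prefixed onto the custom image from --images=MetricsServer=registry.k8s.io/echoserver:1.4. An illustrative sketch of that derivation (values are taken from the flags, not from minikube internals):

package main

import "fmt"

func main() {
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
	registry := "fake.domain"                 // from --registries=MetricsServer=...
	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
}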

TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-239000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-239000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.1859725s)

-- stdout --
	* [old-k8s-version-239000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-239000" primary control-plane node in "old-k8s-version-239000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:42:08.935939   15395 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:08.936082   15395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:08.936089   15395 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:08.936092   15395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:08.936220   15395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:08.937324   15395 out.go:352] Setting JSON to false
	I1030 11:42:08.956517   15395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7899,"bootTime":1730305829,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:42:08.956595   15395 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:42:08.961361   15395 out.go:177] * [old-k8s-version-239000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:42:08.969381   15395 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:42:08.969445   15395 notify.go:220] Checking for updates...
	I1030 11:42:08.976287   15395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:42:08.979349   15395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:42:08.982398   15395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:42:08.985416   15395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:42:08.988313   15395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:42:08.991738   15395 config.go:182] Loaded profile config "old-k8s-version-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1030 11:42:08.995345   15395 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1030 11:42:08.998336   15395 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:42:09.002389   15395 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:42:09.009486   15395 start.go:297] selected driver: qemu2
	I1030 11:42:09.009535   15395 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:09.009618   15395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:42:09.012381   15395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:42:09.012404   15395 cni.go:84] Creating CNI manager for ""
	I1030 11:42:09.012422   15395 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1030 11:42:09.012452   15395 start.go:340] cluster config:
	{Name:old-k8s-version-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:09.016779   15395 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:09.024340   15395 out.go:177] * Starting "old-k8s-version-239000" primary control-plane node in "old-k8s-version-239000" cluster
	I1030 11:42:09.027395   15395 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:42:09.027407   15395 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1030 11:42:09.027413   15395 cache.go:56] Caching tarball of preloaded images
	I1030 11:42:09.027477   15395 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:42:09.027482   15395 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1030 11:42:09.027528   15395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/old-k8s-version-239000/config.json ...
	I1030 11:42:09.027964   15395 start.go:360] acquireMachinesLock for old-k8s-version-239000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:09.027992   15395 start.go:364] duration metric: took 22.334µs to acquireMachinesLock for "old-k8s-version-239000"
	I1030 11:42:09.028000   15395 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:42:09.028005   15395 fix.go:54] fixHost starting: 
	I1030 11:42:09.028115   15395 fix.go:112] recreateIfNeeded on old-k8s-version-239000: state=Stopped err=<nil>
	W1030 11:42:09.028121   15395 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:42:09.031383   15395 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-239000" ...
	I1030 11:42:09.039311   15395 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:09.039353   15395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:3f:bf:0b:1e:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:42:09.041470   15395 main.go:141] libmachine: STDOUT: 
	I1030 11:42:09.041483   15395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:09.041509   15395 fix.go:56] duration metric: took 13.502917ms for fixHost
	I1030 11:42:09.041513   15395 start.go:83] releasing machines lock for "old-k8s-version-239000", held for 13.517333ms
	W1030 11:42:09.041518   15395 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:09.041558   15395 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:09.041561   15395 start.go:729] Will try again in 5 seconds ...
	I1030 11:42:14.043635   15395 start.go:360] acquireMachinesLock for old-k8s-version-239000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:14.043771   15395 start.go:364] duration metric: took 107.75µs to acquireMachinesLock for "old-k8s-version-239000"
	I1030 11:42:14.043796   15395 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:42:14.043800   15395 fix.go:54] fixHost starting: 
	I1030 11:42:14.043983   15395 fix.go:112] recreateIfNeeded on old-k8s-version-239000: state=Stopped err=<nil>
	W1030 11:42:14.043989   15395 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:42:14.053272   15395 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-239000" ...
	I1030 11:42:14.057133   15395 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:14.057234   15395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:3f:bf:0b:1e:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/old-k8s-version-239000/disk.qcow2
	I1030 11:42:14.059788   15395 main.go:141] libmachine: STDOUT: 
	I1030 11:42:14.059806   15395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:14.059825   15395 fix.go:56] duration metric: took 16.025292ms for fixHost
	I1030 11:42:14.059829   15395 start.go:83] releasing machines lock for "old-k8s-version-239000", held for 16.049833ms
	W1030 11:42:14.059904   15395 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:14.067987   15395 out.go:201] 
	W1030 11:42:14.071151   15395 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:14.071156   15395 out.go:270] * 
	* 
	W1030 11:42:14.071725   15395 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:42:14.082160   15395 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-239000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (37.116042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)
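
Note the different code path with the same outcome: SecondStart reuses the existing machine (fixHost, "Skipping create...Using existing machine configuration") instead of createHost, but both funnel into the same socket_vmnet_client invocation and the same single-retry loop. A compressed, illustrative sketch of the control flow visible in the trace (function names are placeholders, not minikube's):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails identically on both
// attempts in the log above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}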

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-239000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (33.332291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-239000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-239000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-239000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.415584ms)

** stderr ** 
	error: context "old-k8s-version-239000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-239000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (34.218125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-239000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (34.876875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
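
The -want +got diff above is entirely one-sided because "image list" against a stopped host returns nothing. A small Go sketch of the same comparison, with the expected v1.20.0 image set copied from the failure and an empty "got" set mirroring this run:

package main

import "fmt"

func main() {
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/kube-controller-manager:v1.20.0",
		"k8s.gcr.io/kube-proxy:v1.20.0",
		"k8s.gcr.io/kube-scheduler:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	got := map[string]bool{} // empty: the stopped host reported no images
	for _, img := range want {
		if !got[img] {
			fmt.Println("-", img) // reproduces the -want side of the diff
		}
	}
}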

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-239000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-239000 --alsologtostderr -v=1: exit status 83 (46.642917ms)

-- stdout --
	* The control-plane node old-k8s-version-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-239000"

-- /stdout --
** stderr ** 
	I1030 11:42:14.336357   15414 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:14.336760   15414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:14.336767   15414 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:14.336769   15414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:14.336943   15414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:14.337143   15414 out.go:352] Setting JSON to false
	I1030 11:42:14.337150   15414 mustload.go:65] Loading cluster: old-k8s-version-239000
	I1030 11:42:14.337376   15414 config.go:182] Loaded profile config "old-k8s-version-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1030 11:42:14.341894   15414 out.go:177] * The control-plane node old-k8s-version-239000 host is not running: state=Stopped
	I1030 11:42:14.345014   15414 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-239000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-239000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (34.460792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (33.725875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.848589541s)

-- stdout --
	* [no-preload-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:42:14.671469   15431 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:14.671610   15431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:14.671615   15431 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:14.671617   15431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:14.671756   15431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:14.672941   15431 out.go:352] Setting JSON to false
	I1030 11:42:14.690762   15431 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7905,"bootTime":1730305829,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:42:14.690847   15431 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:42:14.696037   15431 out.go:177] * [no-preload-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:42:14.702097   15431 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:42:14.702151   15431 notify.go:220] Checking for updates...
	I1030 11:42:14.709876   15431 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:42:14.713033   15431 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:42:14.716027   15431 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:42:14.719083   15431 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:42:14.722049   15431 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:42:14.725418   15431 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:42:14.725477   15431 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:42:14.725522   15431 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:42:14.730031   15431 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:42:14.737050   15431 start.go:297] selected driver: qemu2
	I1030 11:42:14.737059   15431 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:42:14.737068   15431 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:42:14.739569   15431 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:42:14.744028   15431 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:42:14.747176   15431 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:42:14.747200   15431 cni.go:84] Creating CNI manager for ""
	I1030 11:42:14.747230   15431 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:42:14.747235   15431 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:42:14.747261   15431 start.go:340] cluster config:
	{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:14.751685   15431 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.760027   15431 out.go:177] * Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	I1030 11:42:14.764059   15431 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:42:14.764115   15431 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/no-preload-143000/config.json ...
	I1030 11:42:14.764130   15431 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/no-preload-143000/config.json: {Name:mkd77dc7ba401dd435eddae24c2dc86f6674a2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:42:14.764141   15431 cache.go:107] acquiring lock: {Name:mka69c19de02f0de155a3ee65c19cab0fdf62d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764142   15431 cache.go:107] acquiring lock: {Name:mkc4effea16856a69ffd2bc1a06c7ae09e7e81de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764146   15431 cache.go:107] acquiring lock: {Name:mk6f1e0ca0bc37cd3a2828304238f0f9686534f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764165   15431 cache.go:107] acquiring lock: {Name:mk0598828950cfe51ec7033bfaa15ae9a81fb5df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764167   15431 cache.go:107] acquiring lock: {Name:mk5f94c3180464e9ff79a288ed8c2c87aeb72d2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764318   15431 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 11:42:14.764343   15431 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 11:42:14.764364   15431 cache.go:107] acquiring lock: {Name:mkf32f50f95e27d27e60335f194df5d47b747313 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764386   15431 cache.go:107] acquiring lock: {Name:mkd1fcbdb49bab16e089dc95e0b6f37ca38682ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764344   15431 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1030 11:42:14.764438   15431 cache.go:107] acquiring lock: {Name:mk66b6a8abdaeace67b2bef522e7ed39ffbd4d08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:14.764462   15431 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 11:42:14.764555   15431 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1030 11:42:14.764575   15431 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 436.167µs
	I1030 11:42:14.764585   15431 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 11:42:14.764594   15431 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1030 11:42:14.764594   15431 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 11:42:14.764602   15431 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1030 11:42:14.764420   15431 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:14.764669   15431 start.go:364] duration metric: took 40.583µs to acquireMachinesLock for "no-preload-143000"
	I1030 11:42:14.764682   15431 start.go:93] Provisioning new machine with config: &{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:42:14.764723   15431 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:42:14.773020   15431 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:42:14.777199   15431 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1030 11:42:14.777578   15431 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1030 11:42:14.777970   15431 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1030 11:42:14.777995   15431 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1030 11:42:14.778104   15431 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1030 11:42:14.778400   15431 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1030 11:42:14.779912   15431 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1030 11:42:14.788781   15431 start.go:159] libmachine.API.Create for "no-preload-143000" (driver="qemu2")
	I1030 11:42:14.788806   15431 client.go:168] LocalClient.Create starting
	I1030 11:42:14.788890   15431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:42:14.788927   15431 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:14.788938   15431 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:14.788989   15431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:42:14.789018   15431 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:14.789024   15431 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:14.789363   15431 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:42:14.958586   15431 main.go:141] libmachine: Creating SSH key...
	I1030 11:42:15.094309   15431 main.go:141] libmachine: Creating Disk image...
	I1030 11:42:15.094336   15431 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:42:15.094556   15431 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:15.105323   15431 main.go:141] libmachine: STDOUT: 
	I1030 11:42:15.105350   15431 main.go:141] libmachine: STDERR: 
	I1030 11:42:15.105412   15431 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2 +20000M
	I1030 11:42:15.114266   15431 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:42:15.114295   15431 main.go:141] libmachine: STDERR: 
	I1030 11:42:15.114308   15431 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:15.114313   15431 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:42:15.114326   15431 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:15.114350   15431 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:af:2d:8d:9b:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:15.116138   15431 main.go:141] libmachine: STDOUT: 
	I1030 11:42:15.116154   15431 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:15.116175   15431 client.go:171] duration metric: took 327.366833ms to LocalClient.Create
	I1030 11:42:15.174283   15431 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1030 11:42:15.236548   15431 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1030 11:42:15.303124   15431 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1030 11:42:15.381884   15431 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1030 11:42:15.421870   15431 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1030 11:42:15.431955   15431 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1030 11:42:15.540996   15431 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1030 11:42:15.541012   15431 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 776.727083ms
	I1030 11:42:15.541023   15431 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1030 11:42:15.554557   15431 cache.go:162] opening:  /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1030 11:42:17.116291   15431 start.go:128] duration metric: took 2.351579584s to createHost
	I1030 11:42:17.116317   15431 start.go:83] releasing machines lock for "no-preload-143000", held for 2.351670708s
	W1030 11:42:17.116348   15431 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:17.132020   15431 out.go:177] * Deleting "no-preload-143000" in qemu2 ...
	W1030 11:42:17.147475   15431 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:17.147491   15431 start.go:729] Will try again in 5 seconds ...
	I1030 11:42:18.410773   15431 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1030 11:42:18.410807   15431 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 3.6467145s
	I1030 11:42:18.410822   15431 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1030 11:42:18.427310   15431 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1030 11:42:18.427331   15431 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 3.663065709s
	I1030 11:42:18.427343   15431 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1030 11:42:19.068447   15431 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1030 11:42:19.068464   15431 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.304347166s
	I1030 11:42:19.068471   15431 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1030 11:42:20.064594   15431 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1030 11:42:20.064610   15431 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 5.300530125s
	I1030 11:42:20.064618   15431 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1030 11:42:20.197154   15431 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1030 11:42:20.197169   15431 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 5.4328515s
	I1030 11:42:20.197175   15431 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1030 11:42:21.973583   15431 cache.go:157] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1030 11:42:21.973617   15431 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.209534042s
	I1030 11:42:21.973633   15431 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1030 11:42:21.973654   15431 cache.go:87] Successfully saved all images to host disk.
	I1030 11:42:22.149634   15431 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:22.150215   15431 start.go:364] duration metric: took 491.5µs to acquireMachinesLock for "no-preload-143000"
	I1030 11:42:22.150362   15431 start.go:93] Provisioning new machine with config: &{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:42:22.150575   15431 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:42:22.161135   15431 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:42:22.208266   15431 start.go:159] libmachine.API.Create for "no-preload-143000" (driver="qemu2")
	I1030 11:42:22.208330   15431 client.go:168] LocalClient.Create starting
	I1030 11:42:22.208505   15431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:42:22.208605   15431 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:22.208630   15431 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:22.208709   15431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:42:22.208767   15431 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:22.208782   15431 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:22.209452   15431 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:42:22.386541   15431 main.go:141] libmachine: Creating SSH key...
	I1030 11:42:22.420118   15431 main.go:141] libmachine: Creating Disk image...
	I1030 11:42:22.420125   15431 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:42:22.420311   15431 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:22.430832   15431 main.go:141] libmachine: STDOUT: 
	I1030 11:42:22.430857   15431 main.go:141] libmachine: STDERR: 
	I1030 11:42:22.430922   15431 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2 +20000M
	I1030 11:42:22.439574   15431 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:42:22.439590   15431 main.go:141] libmachine: STDERR: 
	I1030 11:42:22.439602   15431 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:22.439608   15431 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:42:22.439616   15431 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:22.439666   15431 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d4:5a:6a:9f:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:22.441563   15431 main.go:141] libmachine: STDOUT: 
	I1030 11:42:22.441578   15431 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:22.441596   15431 client.go:171] duration metric: took 233.256917ms to LocalClient.Create
	I1030 11:42:24.443777   15431 start.go:128] duration metric: took 2.2931885s to createHost
	I1030 11:42:24.443848   15431 start.go:83] releasing machines lock for "no-preload-143000", held for 2.293634667s
	W1030 11:42:24.444301   15431 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:24.453890   15431 out.go:201] 
	W1030 11:42:24.461040   15431 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:24.461092   15431 out.go:270] * 
	* 
	W1030 11:42:24.463911   15431 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:42:24.473979   15431 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (71.652958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
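Every start attempt in this group dies at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'). A quick sanity check on the CI host, sketched under the assumption that socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs recommend (adjust for a manual install):

    $ ls -l /var/run/socket_vmnet                # should exist and be a socket (mode begins with "s")
    $ pgrep -fl socket_vmnet                     # should list a running socket_vmnet daemon
    $ sudo brew services restart socket_vmnet    # restart the daemon if it is down (Homebrew installs only)

If the daemon stays down, every qemu2+socket_vmnet test in the run fails the same way, which matches the failure pattern throughout this report.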

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-143000 create -f testdata/busybox.yaml: exit status 1 (30.186209ms)

** stderr ** 
	error: context "no-preload-143000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-143000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (34.675166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (33.496375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
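The deploy step never reaches a cluster: FirstStart failed before a kubeconfig context was written, so every kubectl --context no-preload-143000 call exits 1 with 'context ... does not exist'. The missing context can be confirmed with standard kubectl (nothing assumed here beyond the profile name from the log):

    $ kubectl config get-contexts                      # list all contexts in the active kubeconfig
    $ kubectl config get-contexts no-preload-143000    # exits non-zero if the context is absent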

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-143000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-143000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-143000 describe deploy/metrics-server -n kube-system: exit status 1 (27.564083ms)

** stderr ** 
	error: context "no-preload-143000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-143000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (34.14125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
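Note that the addons enable command itself logged no error; only the follow-up verification failed, again for lack of a live context. On a healthy cluster, roughly what the test asserts could be checked by hand like this (a sketch; the jsonpath expression is mine, not the harness's):

    $ kubectl --context no-preload-143000 -n kube-system \
        get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # the test expects the image to contain: fake.domain/registry.k8s.io/echoserver:1.4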

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.194915042s)

-- stdout --
	* [no-preload-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	* Restarting existing qemu2 VM for "no-preload-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:42:27.107680   15501 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:27.107853   15501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:27.107857   15501 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:27.107859   15501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:27.107989   15501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:27.109158   15501 out.go:352] Setting JSON to false
	I1030 11:42:27.128003   15501 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7918,"bootTime":1730305829,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:42:27.128095   15501 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:42:27.133336   15501 out.go:177] * [no-preload-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:42:27.141367   15501 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:42:27.141426   15501 notify.go:220] Checking for updates...
	I1030 11:42:27.149211   15501 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:42:27.152384   15501 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:42:27.155392   15501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:42:27.158361   15501 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:42:27.161392   15501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:42:27.165793   15501 config.go:182] Loaded profile config "no-preload-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:42:27.166060   15501 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:42:27.170201   15501 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:42:27.177325   15501 start.go:297] selected driver: qemu2
	I1030 11:42:27.177330   15501 start.go:901] validating driver "qemu2" against &{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:27.177373   15501 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:42:27.179825   15501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:42:27.179851   15501 cni.go:84] Creating CNI manager for ""
	I1030 11:42:27.179876   15501 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:42:27.179901   15501 start.go:340] cluster config:
	{Name:no-preload-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:27.184128   15501 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.192363   15501 out.go:177] * Starting "no-preload-143000" primary control-plane node in "no-preload-143000" cluster
	I1030 11:42:27.196392   15501 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:42:27.196464   15501 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/no-preload-143000/config.json ...
	I1030 11:42:27.196482   15501 cache.go:107] acquiring lock: {Name:mka69c19de02f0de155a3ee65c19cab0fdf62d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196509   15501 cache.go:107] acquiring lock: {Name:mk0598828950cfe51ec7033bfaa15ae9a81fb5df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196583   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1030 11:42:27.196593   15501 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.334µs
	I1030 11:42:27.196597   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1030 11:42:27.196599   15501 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1030 11:42:27.196604   15501 cache.go:107] acquiring lock: {Name:mk5f94c3180464e9ff79a288ed8c2c87aeb72d2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196602   15501 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 99.291µs
	I1030 11:42:27.196613   15501 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1030 11:42:27.196484   15501 cache.go:107] acquiring lock: {Name:mkc4effea16856a69ffd2bc1a06c7ae09e7e81de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196592   15501 cache.go:107] acquiring lock: {Name:mk6f1e0ca0bc37cd3a2828304238f0f9686534f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196610   15501 cache.go:107] acquiring lock: {Name:mk66b6a8abdaeace67b2bef522e7ed39ffbd4d08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196665   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1030 11:42:27.196664   15501 cache.go:107] acquiring lock: {Name:mkd1fcbdb49bab16e089dc95e0b6f37ca38682ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196668   15501 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 64.583µs
	I1030 11:42:27.196671   15501 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1030 11:42:27.196693   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1030 11:42:27.196693   15501 cache.go:107] acquiring lock: {Name:mkf32f50f95e27d27e60335f194df5d47b747313 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:27.196697   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1030 11:42:27.196696   15501 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 143.666µs
	I1030 11:42:27.196734   15501 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 245.667µs
	I1030 11:42:27.196741   15501 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1030 11:42:27.196713   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1030 11:42:27.196765   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1030 11:42:27.196733   15501 cache.go:115] /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1030 11:42:27.196771   15501 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 157.792µs
	I1030 11:42:27.196788   15501 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1030 11:42:27.196797   15501 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 208.958µs
	I1030 11:42:27.196802   15501 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1030 11:42:27.196798   15501 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 138.125µs
	I1030 11:42:27.196809   15501 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1030 11:42:27.196750   15501 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1030 11:42:27.196824   15501 cache.go:87] Successfully saved all images to host disk.
	I1030 11:42:27.196896   15501 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:27.196927   15501 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "no-preload-143000"
	I1030 11:42:27.196935   15501 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:42:27.196939   15501 fix.go:54] fixHost starting: 
	I1030 11:42:27.197046   15501 fix.go:112] recreateIfNeeded on no-preload-143000: state=Stopped err=<nil>
	W1030 11:42:27.197053   15501 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:42:27.205364   15501 out.go:177] * Restarting existing qemu2 VM for "no-preload-143000" ...
	I1030 11:42:27.209304   15501 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:27.209344   15501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d4:5a:6a:9f:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:27.211538   15501 main.go:141] libmachine: STDOUT: 
	I1030 11:42:27.211557   15501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:27.211584   15501 fix.go:56] duration metric: took 14.64375ms for fixHost
	I1030 11:42:27.211588   15501 start.go:83] releasing machines lock for "no-preload-143000", held for 14.65725ms
	W1030 11:42:27.211593   15501 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:27.211619   15501 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:27.211623   15501 start.go:729] Will try again in 5 seconds ...
	I1030 11:42:32.213833   15501 start.go:360] acquireMachinesLock for no-preload-143000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:32.214330   15501 start.go:364] duration metric: took 384.25µs to acquireMachinesLock for "no-preload-143000"
	I1030 11:42:32.214483   15501 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:42:32.214504   15501 fix.go:54] fixHost starting: 
	I1030 11:42:32.215236   15501 fix.go:112] recreateIfNeeded on no-preload-143000: state=Stopped err=<nil>
	W1030 11:42:32.215264   15501 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:42:32.219972   15501 out.go:177] * Restarting existing qemu2 VM for "no-preload-143000" ...
	I1030 11:42:32.226794   15501 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:32.227034   15501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d4:5a:6a:9f:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/no-preload-143000/disk.qcow2
	I1030 11:42:32.237942   15501 main.go:141] libmachine: STDOUT: 
	I1030 11:42:32.237993   15501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:32.238082   15501 fix.go:56] duration metric: took 23.579584ms for fixHost
	I1030 11:42:32.238098   15501 start.go:83] releasing machines lock for "no-preload-143000", held for 23.744542ms
	W1030 11:42:32.238264   15501 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:32.242968   15501 out.go:201] 
	W1030 11:42:32.245838   15501 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:32.245869   15501 out.go:270] * 
	* 
	W1030 11:42:32.248477   15501 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:42:32.260723   15501 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-143000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (65.601708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
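
Every failure in this group reduces to the same line: socket_vmnet_client cannot reach the daemon's unix socket, so QEMU never receives its network file descriptor and the VM is never started. A minimal manual check on the build host, assuming the daemon belongs under the /opt/socket_vmnet prefix seen in the qemu command line above (the gateway address below is an assumed example, not taken from this log):

	# Does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	# If not, launch it by hand (vmnet.framework requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet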

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-143000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (35.160625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-143000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-143000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-143000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.54325ms)

** stderr ** 
	error: context "no-preload-143000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-143000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (33.681917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
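
The "context does not exist" errors here are downstream of the failed start, not an independent bug: no cluster ever came up, so minikube never wrote a no-preload-143000 context into the kubeconfig. One way to confirm, using the KUBECONFIG path this job uses elsewhere in the log:

	KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig kubectl config get-contexts
	# no-preload-143000 should be missing from the output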

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-143000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
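
The block above is a go-cmp-style diff: all eight expected images sit on the -want side and the +got side is empty, since `image list` had no running VM to query. The got side can be reproduced by hand; the jq filter below assumes the JSON output is an array of objects each carrying a repoTags list, which is an assumption about the schema rather than something this log confirms:

	out/minikube-darwin-arm64 -p no-preload-143000 image list --format=json | jq -r '.[].repoTags[]'
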
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (33.777583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-143000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-143000 --alsologtostderr -v=1: exit status 83 (42.901708ms)

-- stdout --
	* The control-plane node no-preload-143000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-143000"

-- /stdout --
** stderr ** 
	I1030 11:42:32.546443   15523 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:32.546660   15523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:32.546663   15523 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:32.546666   15523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:32.546798   15523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:32.547045   15523 out.go:352] Setting JSON to false
	I1030 11:42:32.547053   15523 mustload.go:65] Loading cluster: no-preload-143000
	I1030 11:42:32.547276   15523 config.go:182] Loaded profile config "no-preload-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:42:32.551981   15523 out.go:177] * The control-plane node no-preload-143000 host is not running: state=Stopped
	I1030 11:42:32.553123   15523 out.go:177]   To start a cluster, run: "minikube start -p no-preload-143000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-143000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (33.361417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (33.900209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-143000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
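
Three distinct exit codes appear in this group, and all are deliberate: 80 (GUEST_PROVISION) when a start fails outright, 7 from `status` when the host is merely Stopped, and 83 from `pause` when it declines to act on a non-running host. That is why the harness can shrug off 7 with "may be ok" but must fail the test on 83. Reproducing the check from a shell:

	out/minikube-darwin-arm64 pause -p no-preload-143000; echo "exit=$?"
	# prints exit=83 while the host is Stopped, matching the run above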

TestStartStop/group/embed-certs/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-717000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-717000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.984509417s)

-- stdout --
	* [embed-certs-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-717000" primary control-plane node in "embed-certs-717000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-717000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:42:32.889190   15540 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:32.889326   15540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:32.889329   15540 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:32.889332   15540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:32.889455   15540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:32.890713   15540 out.go:352] Setting JSON to false
	I1030 11:42:32.908793   15540 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7923,"bootTime":1730305829,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:42:32.908870   15540 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:42:32.915458   15540 out.go:177] * [embed-certs-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:42:32.928550   15540 notify.go:220] Checking for updates...
	I1030 11:42:32.934475   15540 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:42:32.944437   15540 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:42:32.947392   15540 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:42:32.951441   15540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:42:32.955391   15540 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:42:32.959424   15540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:42:32.962861   15540 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:42:32.962919   15540 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:42:32.962975   15540 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:42:32.968211   15540 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:42:32.975466   15540 start.go:297] selected driver: qemu2
	I1030 11:42:32.975472   15540 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:42:32.975478   15540 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:42:32.978010   15540 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:42:32.982444   15540 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:42:32.986546   15540 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:42:32.986570   15540 cni.go:84] Creating CNI manager for ""
	I1030 11:42:32.986606   15540 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:42:32.986626   15540 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:42:32.986657   15540 start.go:340] cluster config:
	{Name:embed-certs-717000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:32.991405   15540 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:32.999436   15540 out.go:177] * Starting "embed-certs-717000" primary control-plane node in "embed-certs-717000" cluster
	I1030 11:42:33.003419   15540 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:42:33.003438   15540 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:42:33.003447   15540 cache.go:56] Caching tarball of preloaded images
	I1030 11:42:33.003522   15540 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:42:33.003528   15540 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:42:33.003601   15540 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/embed-certs-717000/config.json ...
	I1030 11:42:33.003612   15540 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/embed-certs-717000/config.json: {Name:mk2b252f23a3c0ec70fd962e379ba7487223b787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:42:33.003926   15540 start.go:360] acquireMachinesLock for embed-certs-717000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:33.003978   15540 start.go:364] duration metric: took 41.541µs to acquireMachinesLock for "embed-certs-717000"
	I1030 11:42:33.003992   15540 start.go:93] Provisioning new machine with config: &{Name:embed-certs-717000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:42:33.004022   15540 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:42:33.008449   15540 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:42:33.023605   15540 start.go:159] libmachine.API.Create for "embed-certs-717000" (driver="qemu2")
	I1030 11:42:33.023639   15540 client.go:168] LocalClient.Create starting
	I1030 11:42:33.023715   15540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:42:33.023758   15540 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:33.023769   15540 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:33.023818   15540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:42:33.023848   15540 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:33.023858   15540 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:33.024308   15540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:42:33.189787   15540 main.go:141] libmachine: Creating SSH key...
	I1030 11:42:33.342199   15540 main.go:141] libmachine: Creating Disk image...
	I1030 11:42:33.342207   15540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:42:33.342419   15540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:33.352720   15540 main.go:141] libmachine: STDOUT: 
	I1030 11:42:33.352749   15540 main.go:141] libmachine: STDERR: 
	I1030 11:42:33.352815   15540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2 +20000M
	I1030 11:42:33.361364   15540 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:42:33.361379   15540 main.go:141] libmachine: STDERR: 
	I1030 11:42:33.361393   15540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:33.361398   15540 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:42:33.361415   15540 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:33.361448   15540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:5f:f0:d1:53:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:33.363336   15540 main.go:141] libmachine: STDOUT: 
	I1030 11:42:33.363348   15540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:33.363368   15540 client.go:171] duration metric: took 339.728833ms to LocalClient.Create
	I1030 11:42:35.365573   15540 start.go:128] duration metric: took 2.36154825s to createHost
	I1030 11:42:35.365663   15540 start.go:83] releasing machines lock for "embed-certs-717000", held for 2.361703s
	W1030 11:42:35.365714   15540 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:35.380950   15540 out.go:177] * Deleting "embed-certs-717000" in qemu2 ...
	W1030 11:42:35.406259   15540 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:35.406288   15540 start.go:729] Will try again in 5 seconds ...
	I1030 11:42:40.408414   15540 start.go:360] acquireMachinesLock for embed-certs-717000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:40.408720   15540 start.go:364] duration metric: took 252.709µs to acquireMachinesLock for "embed-certs-717000"
	I1030 11:42:40.408756   15540 start.go:93] Provisioning new machine with config: &{Name:embed-certs-717000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:42:40.408900   15540 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:42:40.419255   15540 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:42:40.445350   15540 start.go:159] libmachine.API.Create for "embed-certs-717000" (driver="qemu2")
	I1030 11:42:40.445389   15540 client.go:168] LocalClient.Create starting
	I1030 11:42:40.445478   15540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:42:40.445532   15540 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:40.445545   15540 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:40.445591   15540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:42:40.445630   15540 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:40.445640   15540 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:40.446054   15540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:42:40.614704   15540 main.go:141] libmachine: Creating SSH key...
	I1030 11:42:40.778806   15540 main.go:141] libmachine: Creating Disk image...
	I1030 11:42:40.778818   15540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:42:40.779034   15540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:40.789350   15540 main.go:141] libmachine: STDOUT: 
	I1030 11:42:40.789370   15540 main.go:141] libmachine: STDERR: 
	I1030 11:42:40.789443   15540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2 +20000M
	I1030 11:42:40.799134   15540 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:42:40.799162   15540 main.go:141] libmachine: STDERR: 
	I1030 11:42:40.799176   15540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:40.799183   15540 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:42:40.799191   15540 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:40.799226   15540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:7b:f7:93:85:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:40.801302   15540 main.go:141] libmachine: STDOUT: 
	I1030 11:42:40.801315   15540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:40.801329   15540 client.go:171] duration metric: took 355.938791ms to LocalClient.Create
	I1030 11:42:42.803463   15540 start.go:128] duration metric: took 2.394567208s to createHost
	I1030 11:42:42.803547   15540 start.go:83] releasing machines lock for "embed-certs-717000", held for 2.394838334s
	W1030 11:42:42.803857   15540 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:42.812471   15540 out.go:201] 
	W1030 11:42:42.817529   15540 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:42.817589   15540 out.go:270] * 
	* 
	W1030 11:42:42.818786   15540 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:42:42.828441   15540 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-717000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (58.008583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.04s)
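
The FirstStart trace shows everything up to networking succeeding: the ISO is found in cache, the SSH key is written, and the qcow2 disk is created and resized. Those two qemu-img steps can be replayed verbatim from the log (paths shortened to bare filenames here):

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

Only the final socket_vmnet_client hand-off fails; the driver then deletes the half-created profile, waits 5 seconds, retries once, and exits with GUEST_PROVISION.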

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-717000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-717000 create -f testdata/busybox.yaml: exit status 1 (28.860792ms)

** stderr ** 
	error: context "embed-certs-717000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-717000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (34.500125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (33.547916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-717000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-717000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-717000 describe deploy/metrics-server -n kube-system: exit status 1 (27.9815ms)

** stderr ** 
	error: context "embed-certs-717000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-717000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (34.20725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
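
This one is a partial failure: the `addons enable` call itself appears to succeed (no non-zero exit is logged; it mainly rewrites the profile config), and only the kubectl verification against the never-started cluster fails. With a live cluster the image override would be checked along these lines (the jsonpath variant is illustrative; the test actually shells out to `kubectl describe deploy`):

	kubectl --context embed-certs-717000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4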

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-717000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-717000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.192033958s)

-- stdout --
	* [embed-certs-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-717000" primary control-plane node in "embed-certs-717000" cluster
	* Restarting existing qemu2 VM for "embed-certs-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:42:47.127780   15592 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:47.127914   15592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:47.127918   15592 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:47.127921   15592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:47.128042   15592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:47.129141   15592 out.go:352] Setting JSON to false
	I1030 11:42:47.147325   15592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7938,"bootTime":1730305829,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:42:47.147405   15592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:42:47.151885   15592 out.go:177] * [embed-certs-717000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:42:47.159865   15592 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:42:47.159876   15592 notify.go:220] Checking for updates...
	I1030 11:42:47.166958   15592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:42:47.169890   15592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:42:47.172899   15592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:42:47.176038   15592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:42:47.178888   15592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:42:47.182258   15592 config.go:182] Loaded profile config "embed-certs-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:42:47.182513   15592 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:42:47.185924   15592 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:42:47.192915   15592 start.go:297] selected driver: qemu2
	I1030 11:42:47.192922   15592 start.go:901] validating driver "qemu2" against &{Name:embed-certs-717000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:47.192985   15592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:42:47.195539   15592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:42:47.195567   15592 cni.go:84] Creating CNI manager for ""
	I1030 11:42:47.195590   15592 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:42:47.195609   15592 start.go:340] cluster config:
	{Name:embed-certs-717000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:47.200072   15592 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:47.207846   15592 out.go:177] * Starting "embed-certs-717000" primary control-plane node in "embed-certs-717000" cluster
	I1030 11:42:47.211895   15592 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:42:47.211911   15592 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:42:47.211925   15592 cache.go:56] Caching tarball of preloaded images
	I1030 11:42:47.211995   15592 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:42:47.212000   15592 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:42:47.212061   15592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/embed-certs-717000/config.json ...
	I1030 11:42:47.212542   15592 start.go:360] acquireMachinesLock for embed-certs-717000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:47.212570   15592 start.go:364] duration metric: took 22.958µs to acquireMachinesLock for "embed-certs-717000"
	I1030 11:42:47.212578   15592 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:42:47.212583   15592 fix.go:54] fixHost starting: 
	I1030 11:42:47.212697   15592 fix.go:112] recreateIfNeeded on embed-certs-717000: state=Stopped err=<nil>
	W1030 11:42:47.212704   15592 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:42:47.220901   15592 out.go:177] * Restarting existing qemu2 VM for "embed-certs-717000" ...
	I1030 11:42:47.224848   15592 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:47.224880   15592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:7b:f7:93:85:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:47.226961   15592 main.go:141] libmachine: STDOUT: 
	I1030 11:42:47.226979   15592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:47.227004   15592 fix.go:56] duration metric: took 14.420333ms for fixHost
	I1030 11:42:47.227009   15592 start.go:83] releasing machines lock for "embed-certs-717000", held for 14.434667ms
	W1030 11:42:47.227014   15592 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:47.227057   15592 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:47.227062   15592 start.go:729] Will try again in 5 seconds ...
	I1030 11:42:52.229133   15592 start.go:360] acquireMachinesLock for embed-certs-717000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:52.229613   15592 start.go:364] duration metric: took 390.75µs to acquireMachinesLock for "embed-certs-717000"
	I1030 11:42:52.229696   15592 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:42:52.229716   15592 fix.go:54] fixHost starting: 
	I1030 11:42:52.230463   15592 fix.go:112] recreateIfNeeded on embed-certs-717000: state=Stopped err=<nil>
	W1030 11:42:52.230492   15592 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:42:52.237991   15592 out.go:177] * Restarting existing qemu2 VM for "embed-certs-717000" ...
	I1030 11:42:52.242009   15592 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:52.242219   15592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:7b:f7:93:85:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/embed-certs-717000/disk.qcow2
	I1030 11:42:52.252897   15592 main.go:141] libmachine: STDOUT: 
	I1030 11:42:52.252975   15592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:52.253140   15592 fix.go:56] duration metric: took 23.426666ms for fixHost
	I1030 11:42:52.253155   15592 start.go:83] releasing machines lock for "embed-certs-717000", held for 23.520917ms
	W1030 11:42:52.253315   15592 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:52.261990   15592 out.go:201] 
	W1030 11:42:52.265120   15592 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:42:52.265143   15592 out.go:270] * 
	* 
	W1030 11:42:52.266874   15592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:42:52.275980   15592 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-717000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (70.270417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
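
Every failure in this group bottoms out in the same place: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never started. A minimal sanity check on the build host, using the paths from the command lines above, might be (how the daemon gets restarted depends on how socket_vmnet was installed, so treat this as a sketch rather than a fix):

	# does the unix socket exist at the path minikube passes to socket_vmnet_client?
	ls -l /var/run/socket_vmnet
	# is the socket_vmnet daemon running at all?
	pgrep -fl socket_vmnet

If the socket is missing or no daemon is listening, every "Failed to connect ... Connection refused" line in this report follows directly, and restarting the socket_vmnet service on the host is the place to start.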

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-717000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (36.629375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-717000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-717000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-717000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.748708ms)

** stderr ** 
	error: context "embed-certs-717000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-717000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (33.129667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-717000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (34.737333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
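
The "got" side of the diff above is empty because "image list" ran against a VM that never booted, so no images were ever loaded. For reference, the "want" list is essentially the control-plane image set that kubeadm pins for this Kubernetes release, plus minikube's own storage provisioner; on a host with kubeadm installed, roughly the same list can be reproduced with:

	kubeadm config images list --kubernetes-version v1.31.2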

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-717000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-717000 --alsologtostderr -v=1: exit status 83 (44.241208ms)

-- stdout --
	* The control-plane node embed-certs-717000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-717000"

-- /stdout --
** stderr ** 
	I1030 11:42:52.569250   15615 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:52.569489   15615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:52.569492   15615 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:52.569495   15615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:52.569648   15615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:52.569867   15615 out.go:352] Setting JSON to false
	I1030 11:42:52.569875   15615 mustload.go:65] Loading cluster: embed-certs-717000
	I1030 11:42:52.570096   15615 config.go:182] Loaded profile config "embed-certs-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:42:52.574203   15615 out.go:177] * The control-plane node embed-certs-717000 host is not running: state=Stopped
	I1030 11:42:52.577253   15615 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-717000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-717000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (33.503459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (34.265667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-194000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-194000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.844384833s)

-- stdout --
	* [default-k8s-diff-port-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-194000" primary control-plane node in "default-k8s-diff-port-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:42:53.027007   15639 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:42:53.027195   15639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:53.027199   15639 out.go:358] Setting ErrFile to fd 2...
	I1030 11:42:53.027202   15639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:42:53.027337   15639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:42:53.028494   15639 out.go:352] Setting JSON to false
	I1030 11:42:53.047310   15639 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7944,"bootTime":1730305829,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:42:53.047438   15639 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:42:53.052253   15639 out.go:177] * [default-k8s-diff-port-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:42:53.058330   15639 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:42:53.058468   15639 notify.go:220] Checking for updates...
	I1030 11:42:53.066219   15639 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:42:53.070130   15639 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:42:53.073284   15639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:42:53.076237   15639 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:42:53.079324   15639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:42:53.082619   15639 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:42:53.082679   15639 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:42:53.082728   15639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:42:53.087306   15639 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:42:53.094215   15639 start.go:297] selected driver: qemu2
	I1030 11:42:53.094221   15639 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:42:53.094227   15639 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:42:53.096805   15639 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:42:53.099183   15639 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:42:53.102339   15639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:42:53.102358   15639 cni.go:84] Creating CNI manager for ""
	I1030 11:42:53.102379   15639 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:42:53.102383   15639 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:42:53.102414   15639 start.go:340] cluster config:
	{Name:default-k8s-diff-port-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:42:53.106722   15639 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:42:53.115283   15639 out.go:177] * Starting "default-k8s-diff-port-194000" primary control-plane node in "default-k8s-diff-port-194000" cluster
	I1030 11:42:53.121160   15639 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:42:53.121184   15639 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:42:53.121193   15639 cache.go:56] Caching tarball of preloaded images
	I1030 11:42:53.121283   15639 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:42:53.121289   15639 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:42:53.121343   15639 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/default-k8s-diff-port-194000/config.json ...
	I1030 11:42:53.121354   15639 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/default-k8s-diff-port-194000/config.json: {Name:mk3fe94b6754cff1e20c5f7ca3301ba682b63be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:42:53.121587   15639 start.go:360] acquireMachinesLock for default-k8s-diff-port-194000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:42:53.121633   15639 start.go:364] duration metric: took 37.375µs to acquireMachinesLock for "default-k8s-diff-port-194000"
	I1030 11:42:53.121644   15639 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:42:53.121686   15639 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:42:53.129213   15639 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:42:53.144124   15639 start.go:159] libmachine.API.Create for "default-k8s-diff-port-194000" (driver="qemu2")
	I1030 11:42:53.144158   15639 client.go:168] LocalClient.Create starting
	I1030 11:42:53.144236   15639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:42:53.144277   15639 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:53.144287   15639 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:53.144333   15639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:42:53.144361   15639 main.go:141] libmachine: Decoding PEM data...
	I1030 11:42:53.144369   15639 main.go:141] libmachine: Parsing certificate...
	I1030 11:42:53.144831   15639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:42:53.309703   15639 main.go:141] libmachine: Creating SSH key...
	I1030 11:42:53.390730   15639 main.go:141] libmachine: Creating Disk image...
	I1030 11:42:53.390739   15639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:42:53.390956   15639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:42:53.401258   15639 main.go:141] libmachine: STDOUT: 
	I1030 11:42:53.401279   15639 main.go:141] libmachine: STDERR: 
	I1030 11:42:53.401336   15639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2 +20000M
	I1030 11:42:53.410458   15639 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:42:53.410473   15639 main.go:141] libmachine: STDERR: 
	I1030 11:42:53.410501   15639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:42:53.410507   15639 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:42:53.410519   15639 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:42:53.410552   15639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:1c:1d:1a:84:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:42:53.412521   15639 main.go:141] libmachine: STDOUT: 
	I1030 11:42:53.412539   15639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:42:53.412562   15639 client.go:171] duration metric: took 268.399916ms to LocalClient.Create
	I1030 11:42:55.414791   15639 start.go:128] duration metric: took 2.293101666s to createHost
	I1030 11:42:55.414859   15639 start.go:83] releasing machines lock for "default-k8s-diff-port-194000", held for 2.293244375s
	W1030 11:42:55.414921   15639 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:55.431442   15639 out.go:177] * Deleting "default-k8s-diff-port-194000" in qemu2 ...
	W1030 11:42:55.458126   15639 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:42:55.458156   15639 start.go:729] Will try again in 5 seconds ...
	I1030 11:43:00.460404   15639 start.go:360] acquireMachinesLock for default-k8s-diff-port-194000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:43:00.461027   15639 start.go:364] duration metric: took 523.834µs to acquireMachinesLock for "default-k8s-diff-port-194000"
	I1030 11:43:00.461103   15639 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:43:00.461352   15639 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:43:00.469613   15639 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:43:00.520003   15639 start.go:159] libmachine.API.Create for "default-k8s-diff-port-194000" (driver="qemu2")
	I1030 11:43:00.520072   15639 client.go:168] LocalClient.Create starting
	I1030 11:43:00.520225   15639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:43:00.520322   15639 main.go:141] libmachine: Decoding PEM data...
	I1030 11:43:00.520338   15639 main.go:141] libmachine: Parsing certificate...
	I1030 11:43:00.520434   15639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:43:00.520498   15639 main.go:141] libmachine: Decoding PEM data...
	I1030 11:43:00.520509   15639 main.go:141] libmachine: Parsing certificate...
	I1030 11:43:00.521194   15639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:43:00.698251   15639 main.go:141] libmachine: Creating SSH key...
	I1030 11:43:00.778267   15639 main.go:141] libmachine: Creating Disk image...
	I1030 11:43:00.778275   15639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:43:00.778469   15639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:43:00.789242   15639 main.go:141] libmachine: STDOUT: 
	I1030 11:43:00.789279   15639 main.go:141] libmachine: STDERR: 
	I1030 11:43:00.789364   15639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2 +20000M
	I1030 11:43:00.799283   15639 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:43:00.799310   15639 main.go:141] libmachine: STDERR: 
	I1030 11:43:00.799322   15639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:43:00.799328   15639 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:43:00.799334   15639 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:43:00.799381   15639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b9:f5:46:1a:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:43:00.801587   15639 main.go:141] libmachine: STDOUT: 
	I1030 11:43:00.801601   15639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:43:00.801613   15639 client.go:171] duration metric: took 281.536291ms to LocalClient.Create
	I1030 11:43:02.803775   15639 start.go:128] duration metric: took 2.342407667s to createHost
	I1030 11:43:02.803844   15639 start.go:83] releasing machines lock for "default-k8s-diff-port-194000", held for 2.342821833s
	W1030 11:43:02.804280   15639 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:02.813881   15639 out.go:201] 
	W1030 11:43:02.817977   15639 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:43:02.818004   15639 out.go:270] * 
	* 
	W1030 11:43:02.819447   15639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:43:02.829886   15639 out.go:201] 

** /stderr **
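One detail worth noting in the trace above: disk provisioning succeeds both times (qemu-img convert and qemu-img resize return empty STDERR), so the 20000 MB qcow2 image exists; only the socket_vmnet attach fails. The image left behind can be inspected independently of minikube, e.g.:

	qemu-img info /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
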
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-194000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (56.247834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.90s)
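
The remedy minikube prints ("minikube delete -p default-k8s-diff-port-194000") only clears the half-created profile; it cannot address the root cause, since the daemon behind /var/run/socket_vmnet is refusing connections for every profile in this run. Using the binary under test, that cleanup step would be:

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-194000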

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-194000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-194000 create -f testdata/busybox.yaml: exit status 1 (28.1405ms)

** stderr ** 
	error: context "default-k8s-diff-port-194000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-194000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (33.985584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (33.580459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
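
The kubectl failures in this group are all secondary: the first start exited before provisioning, so no "default-k8s-diff-port-194000" context was ever written to the kubeconfig, and every kubectl --context invocation fails identically. This is easy to confirm against the kubeconfig the run uses:

	kubectl --kubeconfig /Users/jenkins/minikube-integration/19883-11536/kubeconfig config get-contexts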

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-194000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-194000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-194000 describe deploy/metrics-server -n kube-system: exit status 1 (27.802ms)

** stderr ** 
	error: context "default-k8s-diff-port-194000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-194000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (34.1575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-194000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-194000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.195080208s)

-- stdout --
	* [default-k8s-diff-port-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-194000" primary control-plane node in "default-k8s-diff-port-194000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-194000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-194000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:43:07.151443   15693 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:43:07.151606   15693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:07.151609   15693 out.go:358] Setting ErrFile to fd 2...
	I1030 11:43:07.151612   15693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:07.151740   15693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:43:07.153078   15693 out.go:352] Setting JSON to false
	I1030 11:43:07.173538   15693 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7958,"bootTime":1730305829,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:43:07.173632   15693 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:43:07.178347   15693 out.go:177] * [default-k8s-diff-port-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:43:07.185321   15693 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:43:07.185350   15693 notify.go:220] Checking for updates...
	I1030 11:43:07.192300   15693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:43:07.195313   15693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:43:07.198314   15693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:43:07.201309   15693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:43:07.204288   15693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:43:07.207625   15693 config.go:182] Loaded profile config "default-k8s-diff-port-194000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:43:07.207916   15693 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:43:07.211241   15693 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:43:07.220289   15693 start.go:297] selected driver: qemu2
	I1030 11:43:07.220304   15693 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:43:07.220363   15693 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:43:07.223281   15693 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 11:43:07.223313   15693 cni.go:84] Creating CNI manager for ""
	I1030 11:43:07.223332   15693 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:43:07.223357   15693 start.go:340] cluster config:
	{Name:default-k8s-diff-port-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:43:07.227941   15693 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:43:07.236263   15693 out.go:177] * Starting "default-k8s-diff-port-194000" primary control-plane node in "default-k8s-diff-port-194000" cluster
	I1030 11:43:07.239324   15693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:43:07.239342   15693 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:43:07.239350   15693 cache.go:56] Caching tarball of preloaded images
	I1030 11:43:07.239431   15693 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:43:07.239436   15693 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:43:07.239495   15693 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/default-k8s-diff-port-194000/config.json ...
	I1030 11:43:07.239855   15693 start.go:360] acquireMachinesLock for default-k8s-diff-port-194000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:43:07.239890   15693 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "default-k8s-diff-port-194000"
	I1030 11:43:07.239899   15693 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:43:07.239902   15693 fix.go:54] fixHost starting: 
	I1030 11:43:07.240010   15693 fix.go:112] recreateIfNeeded on default-k8s-diff-port-194000: state=Stopped err=<nil>
	W1030 11:43:07.240017   15693 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:43:07.244315   15693 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-194000" ...
	I1030 11:43:07.259960   15693 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:43:07.260007   15693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b9:f5:46:1a:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:43:07.262383   15693 main.go:141] libmachine: STDOUT: 
	I1030 11:43:07.262402   15693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:43:07.262433   15693 fix.go:56] duration metric: took 22.527583ms for fixHost
	I1030 11:43:07.262438   15693 start.go:83] releasing machines lock for "default-k8s-diff-port-194000", held for 22.543291ms
	W1030 11:43:07.262444   15693 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:43:07.262493   15693 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:07.262497   15693 start.go:729] Will try again in 5 seconds ...
	I1030 11:43:12.264709   15693 start.go:360] acquireMachinesLock for default-k8s-diff-port-194000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:43:12.265177   15693 start.go:364] duration metric: took 361.083µs to acquireMachinesLock for "default-k8s-diff-port-194000"
	I1030 11:43:12.265318   15693 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:43:12.265336   15693 fix.go:54] fixHost starting: 
	I1030 11:43:12.265962   15693 fix.go:112] recreateIfNeeded on default-k8s-diff-port-194000: state=Stopped err=<nil>
	W1030 11:43:12.265979   15693 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:43:12.269542   15693 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-194000" ...
	I1030 11:43:12.276417   15693 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:43:12.276548   15693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b9:f5:46:1a:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/default-k8s-diff-port-194000/disk.qcow2
	I1030 11:43:12.285499   15693 main.go:141] libmachine: STDOUT: 
	I1030 11:43:12.285564   15693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:43:12.285655   15693 fix.go:56] duration metric: took 20.322583ms for fixHost
	I1030 11:43:12.285670   15693 start.go:83] releasing machines lock for "default-k8s-diff-port-194000", held for 20.471583ms
	W1030 11:43:12.285843   15693 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-194000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-194000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:12.290619   15693 out.go:201] 
	W1030 11:43:12.293432   15693 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:43:12.293448   15693 out.go:270] * 
	* 
	W1030 11:43:12.294895   15693 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:43:12.303506   15693 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-194000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (60.635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
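Every failure in this group traces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host provisioning aborts. A minimal triage on the build host, assuming socket_vmnet was installed via Homebrew as the paths in the log suggest, would be:

	# Is the unix socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is down, restart it (vmnet.framework requires root)
	sudo brew services restart socket_vmnet

With the daemon back up, the same out/minikube-darwin-arm64 start invocation should get past host provisioning.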

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-194000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (34.738458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-194000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-194000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-194000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.615708ms)

** stderr ** 
	error: context "default-k8s-diff-port-194000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-194000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (34.0345ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-194000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
	[]string{
	- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
	- 	"registry.k8s.io/coredns/coredns:v1.11.3",
	- 	"registry.k8s.io/etcd:3.5.15-0",
	- 	"registry.k8s.io/kube-apiserver:v1.31.2",
	- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
	- 	"registry.k8s.io/kube-proxy:v1.31.2",
	- 	"registry.k8s.io/kube-scheduler:v1.31.2",
	- 	"registry.k8s.io/pause:3.10",
	}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (34.333792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
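The "-want +got" diff above lists only wants because the got side is empty: since the VM never booted, "image list" had no runtime to query, so every expected image is reported missing rather than mismatched. On a healthy profile the same binary would show them directly; a hypothetical invocation against a running profile:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-194000 image list --format=table

Any of the supported output formats (short, table, json, yaml) would do; the test itself uses --format=json, as logged above.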

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-194000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-194000 --alsologtostderr -v=1: exit status 83 (45.027541ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-194000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-194000"

-- /stdout --
** stderr ** 
	I1030 11:43:12.580989   15713 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:43:12.581197   15713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:12.581200   15713 out.go:358] Setting ErrFile to fd 2...
	I1030 11:43:12.581202   15713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:12.581330   15713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:43:12.581546   15713 out.go:352] Setting JSON to false
	I1030 11:43:12.581554   15713 mustload.go:65] Loading cluster: default-k8s-diff-port-194000
	I1030 11:43:12.581764   15713 config.go:182] Loaded profile config "default-k8s-diff-port-194000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:43:12.586435   15713 out.go:177] * The control-plane node default-k8s-diff-port-194000 host is not running: state=Stopped
	I1030 11:43:12.589469   15713 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-194000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-194000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (34.230542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (33.985375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-194000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
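The repeated "exit status 7" from the status probes in these post-mortems appears to be a bitmask rather than a plain error code: as of recent minikube sources, bit 0 is set when the host is not running, bit 1 when the cluster (apiserver) is not running, and bit 2 when the kubeconfig is not configured for the cluster, so 7 means all three, consistent with the "Stopped" host state. A quick way to read it by hand, using the same format template the harness itself uses:

	out/minikube-darwin-arm64 status -p default-k8s-diff-port-194000 --format={{.Host}}
	echo $?   # 7 = host stopped + cluster not running + kubeconfig not configured

pause exits with its own non-zero code (83 here) because it refuses to operate on a non-running control plane, as its stdout above explains.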

TestStartStop/group/newest-cni/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-018000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-018000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.833222584s)

-- stdout --
	* [newest-cni-018000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-018000" primary control-plane node in "newest-cni-018000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-018000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:43:12.926070   15730 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:43:12.926235   15730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:12.926239   15730 out.go:358] Setting ErrFile to fd 2...
	I1030 11:43:12.926241   15730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:12.926388   15730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:43:12.927625   15730 out.go:352] Setting JSON to false
	I1030 11:43:12.946128   15730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7963,"bootTime":1730305829,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:43:12.946197   15730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:43:12.951065   15730 out.go:177] * [newest-cni-018000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:43:12.959074   15730 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:43:12.959155   15730 notify.go:220] Checking for updates...
	I1030 11:43:12.965987   15730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:43:12.969010   15730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:43:12.973055   15730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:43:12.975971   15730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:43:12.979036   15730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:43:12.982390   15730 config.go:182] Loaded profile config "multinode-097000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:43:12.982446   15730 config.go:182] Loaded profile config "stopped-upgrade-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1030 11:43:12.982494   15730 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:43:12.986925   15730 out.go:177] * Using the qemu2 driver based on user configuration
	I1030 11:43:12.994065   15730 start.go:297] selected driver: qemu2
	I1030 11:43:12.994071   15730 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:43:12.994076   15730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:43:12.996625   15730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1030 11:43:12.996658   15730 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1030 11:43:13.004038   15730 out.go:177] * Automatically selected the socket_vmnet network
	I1030 11:43:13.007057   15730 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1030 11:43:13.007072   15730 cni.go:84] Creating CNI manager for ""
	I1030 11:43:13.007091   15730 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:43:13.007097   15730 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:43:13.007126   15730 start.go:340] cluster config:
	{Name:newest-cni-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:43:13.011427   15730 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:43:13.020008   15730 out.go:177] * Starting "newest-cni-018000" primary control-plane node in "newest-cni-018000" cluster
	I1030 11:43:13.024015   15730 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:43:13.024032   15730 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:43:13.024039   15730 cache.go:56] Caching tarball of preloaded images
	I1030 11:43:13.024128   15730 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:43:13.024133   15730 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:43:13.024184   15730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/newest-cni-018000/config.json ...
	I1030 11:43:13.024200   15730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/newest-cni-018000/config.json: {Name:mkbc6ece5a7d62f7a0f477c5e45d187d7291a620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:43:13.024563   15730 start.go:360] acquireMachinesLock for newest-cni-018000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:43:13.024604   15730 start.go:364] duration metric: took 35.708µs to acquireMachinesLock for "newest-cni-018000"
	I1030 11:43:13.024615   15730 start.go:93] Provisioning new machine with config: &{Name:newest-cni-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:43:13.024641   15730 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:43:13.029039   15730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:43:13.043540   15730 start.go:159] libmachine.API.Create for "newest-cni-018000" (driver="qemu2")
	I1030 11:43:13.043572   15730 client.go:168] LocalClient.Create starting
	I1030 11:43:13.043642   15730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:43:13.043684   15730 main.go:141] libmachine: Decoding PEM data...
	I1030 11:43:13.043696   15730 main.go:141] libmachine: Parsing certificate...
	I1030 11:43:13.043738   15730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:43:13.043776   15730 main.go:141] libmachine: Decoding PEM data...
	I1030 11:43:13.043782   15730 main.go:141] libmachine: Parsing certificate...
	I1030 11:43:13.044166   15730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:43:13.210804   15730 main.go:141] libmachine: Creating SSH key...
	I1030 11:43:13.248506   15730 main.go:141] libmachine: Creating Disk image...
	I1030 11:43:13.248512   15730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:43:13.248697   15730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:13.259306   15730 main.go:141] libmachine: STDOUT: 
	I1030 11:43:13.259334   15730 main.go:141] libmachine: STDERR: 
	I1030 11:43:13.259396   15730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2 +20000M
	I1030 11:43:13.268462   15730 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:43:13.268480   15730 main.go:141] libmachine: STDERR: 
	I1030 11:43:13.268497   15730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:13.268502   15730 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:43:13.268517   15730 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:43:13.268546   15730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:fb:06:4b:41:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:13.270549   15730 main.go:141] libmachine: STDOUT: 
	I1030 11:43:13.270565   15730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:43:13.270587   15730 client.go:171] duration metric: took 227.01125ms to LocalClient.Create
	I1030 11:43:15.272338   15730 start.go:128] duration metric: took 2.247717209s to createHost
	I1030 11:43:15.272353   15730 start.go:83] releasing machines lock for "newest-cni-018000", held for 2.247771208s
	W1030 11:43:15.272363   15730 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:15.281023   15730 out.go:177] * Deleting "newest-cni-018000" in qemu2 ...
	W1030 11:43:15.294831   15730 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:15.294839   15730 start.go:729] Will try again in 5 seconds ...
	I1030 11:43:20.296970   15730 start.go:360] acquireMachinesLock for newest-cni-018000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:43:20.297726   15730 start.go:364] duration metric: took 637.25µs to acquireMachinesLock for "newest-cni-018000"
	I1030 11:43:20.297851   15730 start.go:93] Provisioning new machine with config: &{Name:newest-cni-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1030 11:43:20.298121   15730 start.go:125] createHost starting for "" (driver="qemu2")
	I1030 11:43:20.304765   15730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 11:43:20.350442   15730 start.go:159] libmachine.API.Create for "newest-cni-018000" (driver="qemu2")
	I1030 11:43:20.350506   15730 client.go:168] LocalClient.Create starting
	I1030 11:43:20.350642   15730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/ca.pem
	I1030 11:43:20.350745   15730 main.go:141] libmachine: Decoding PEM data...
	I1030 11:43:20.350764   15730 main.go:141] libmachine: Parsing certificate...
	I1030 11:43:20.350832   15730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19883-11536/.minikube/certs/cert.pem
	I1030 11:43:20.350888   15730 main.go:141] libmachine: Decoding PEM data...
	I1030 11:43:20.350901   15730 main.go:141] libmachine: Parsing certificate...
	I1030 11:43:20.351622   15730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso...
	I1030 11:43:20.525256   15730 main.go:141] libmachine: Creating SSH key...
	I1030 11:43:20.663621   15730 main.go:141] libmachine: Creating Disk image...
	I1030 11:43:20.663632   15730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1030 11:43:20.663833   15730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:20.673920   15730 main.go:141] libmachine: STDOUT: 
	I1030 11:43:20.673937   15730 main.go:141] libmachine: STDERR: 
	I1030 11:43:20.673992   15730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2 +20000M
	I1030 11:43:20.682520   15730 main.go:141] libmachine: STDOUT: Image resized.
	
	I1030 11:43:20.682535   15730 main.go:141] libmachine: STDERR: 
	I1030 11:43:20.682545   15730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:20.682551   15730 main.go:141] libmachine: Starting QEMU VM...
	I1030 11:43:20.682560   15730 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:43:20.682589   15730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e6:db:2c:56:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:20.684396   15730 main.go:141] libmachine: STDOUT: 
	I1030 11:43:20.684412   15730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:43:20.684425   15730 client.go:171] duration metric: took 333.916833ms to LocalClient.Create
	I1030 11:43:22.686581   15730 start.go:128] duration metric: took 2.388453125s to createHost
	I1030 11:43:22.686646   15730 start.go:83] releasing machines lock for "newest-cni-018000", held for 2.388907125s
	W1030 11:43:22.687029   15730 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:22.694141   15730 out.go:201] 
	W1030 11:43:22.700241   15730 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:43:22.700279   15730 out.go:270] * 
	* 
	W1030 11:43:22.703153   15730 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:43:22.712082   15730 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-018000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000: exit status 7 (62.019625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.90s)
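Note that both disk-image steps succeeded before the launch failed: qemu-img convert returned an empty STDERR and qemu-img resize reported "Image resized.". Provisioning only breaks at the socket_vmnet hop, so the qcow2 artifacts left behind should be intact. One way to double-check that, assuming the machine directory from the log still exists:

	qemu-img info /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2

This should report format qcow2 with a virtual size of roughly 20000 MB plus the base image, matching the "Creating 20000 MB hard disk image" step above.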

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-018000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-018000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.190688459s)

-- stdout --
	* [newest-cni-018000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-018000" primary control-plane node in "newest-cni-018000" cluster
	* Restarting existing qemu2 VM for "newest-cni-018000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-018000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1030 11:43:26.732630   15774 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:43:26.732776   15774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:26.732779   15774 out.go:358] Setting ErrFile to fd 2...
	I1030 11:43:26.732782   15774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:26.732914   15774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:43:26.734002   15774 out.go:352] Setting JSON to false
	I1030 11:43:26.753131   15774 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7977,"bootTime":1730305829,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:43:26.753209   15774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:43:26.757766   15774 out.go:177] * [newest-cni-018000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:43:26.764780   15774 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:43:26.764860   15774 notify.go:220] Checking for updates...
	I1030 11:43:26.771648   15774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:43:26.774702   15774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:43:26.777710   15774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:43:26.780716   15774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:43:26.783658   15774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:43:26.787008   15774 config.go:182] Loaded profile config "newest-cni-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:43:26.787264   15774 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:43:26.790668   15774 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:43:26.797707   15774 start.go:297] selected driver: qemu2
	I1030 11:43:26.797714   15774 start.go:901] validating driver "qemu2" against &{Name:newest-cni-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:43:26.797757   15774 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:43:26.800280   15774 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1030 11:43:26.800305   15774 cni.go:84] Creating CNI manager for ""
	I1030 11:43:26.800323   15774 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:43:26.800346   15774 start.go:340] cluster config:
	{Name:newest-cni-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:43:26.804546   15774 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:43:26.812651   15774 out.go:177] * Starting "newest-cni-018000" primary control-plane node in "newest-cni-018000" cluster
	I1030 11:43:26.815696   15774 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:43:26.815709   15774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:43:26.815715   15774 cache.go:56] Caching tarball of preloaded images
	I1030 11:43:26.815783   15774 preload.go:172] Found /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1030 11:43:26.815788   15774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1030 11:43:26.815838   15774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/newest-cni-018000/config.json ...
	I1030 11:43:26.816306   15774 start.go:360] acquireMachinesLock for newest-cni-018000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:43:26.816355   15774 start.go:364] duration metric: took 43.333µs to acquireMachinesLock for "newest-cni-018000"
	I1030 11:43:26.816363   15774 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:43:26.816368   15774 fix.go:54] fixHost starting: 
	I1030 11:43:26.816478   15774 fix.go:112] recreateIfNeeded on newest-cni-018000: state=Stopped err=<nil>
	W1030 11:43:26.816485   15774 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:43:26.820693   15774 out.go:177] * Restarting existing qemu2 VM for "newest-cni-018000" ...
	I1030 11:43:26.828675   15774 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:43:26.828709   15774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e6:db:2c:56:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:26.830794   15774 main.go:141] libmachine: STDOUT: 
	I1030 11:43:26.830810   15774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:43:26.830836   15774 fix.go:56] duration metric: took 14.467458ms for fixHost
	I1030 11:43:26.830841   15774 start.go:83] releasing machines lock for "newest-cni-018000", held for 14.482333ms
	W1030 11:43:26.830848   15774 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:43:26.830895   15774 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:26.830899   15774 start.go:729] Will try again in 5 seconds ...
	I1030 11:43:31.833020   15774 start.go:360] acquireMachinesLock for newest-cni-018000: {Name:mk059f91dc009bcc4139314331cf70a12d388da5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 11:43:31.833566   15774 start.go:364] duration metric: took 457.75µs to acquireMachinesLock for "newest-cni-018000"
	I1030 11:43:31.833722   15774 start.go:96] Skipping create...Using existing machine configuration
	I1030 11:43:31.833743   15774 fix.go:54] fixHost starting: 
	I1030 11:43:31.834510   15774 fix.go:112] recreateIfNeeded on newest-cni-018000: state=Stopped err=<nil>
	W1030 11:43:31.834538   15774 fix.go:138] unexpected machine state, will restart: <nil>
	I1030 11:43:31.839990   15774 out.go:177] * Restarting existing qemu2 VM for "newest-cni-018000" ...
	I1030 11:43:31.843970   15774 qemu.go:418] Using hvf for hardware acceleration
	I1030 11:43:31.844161   15774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e6:db:2c:56:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19883-11536/.minikube/machines/newest-cni-018000/disk.qcow2
	I1030 11:43:31.855248   15774 main.go:141] libmachine: STDOUT: 
	I1030 11:43:31.855301   15774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1030 11:43:31.855406   15774 fix.go:56] duration metric: took 21.636458ms for fixHost
	I1030 11:43:31.855424   15774 start.go:83] releasing machines lock for "newest-cni-018000", held for 21.837375ms
	W1030 11:43:31.855594   15774 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-018000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-018000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1030 11:43:31.861951   15774 out.go:201] 
	W1030 11:43:31.864999   15774 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1030 11:43:31.865019   15774 out.go:270] * 
	* 
	W1030 11:43:31.867186   15774 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:43:31.875983   15774 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-018000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000: exit status 7 (74.516ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
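Note on the failure mode: both start attempts above die at the same step. The qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connect is refused, so the VM never boots. A minimal Go sketch of the same probe (assumption: run on the affected CI host; the socket path is taken verbatim from the log):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Attempt the same unix-socket connect that socket_vmnet_client performs;
	// "connection refused" here reproduces the driver error logged above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet daemon not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet daemon is accepting connections")
}

Until the daemon is brought back up, every retry from minikube's side (including the 5-second backoff seen above) will fail identically.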

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-018000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000: exit status 7 (33.391208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
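This image check is a downstream casualty of the SecondStart failure: with the host stopped, "image list" returns nothing, so every expected v1.31.2 image is reported missing. The "(-want +got)" format matches the output of github.com/google/go-cmp; a sketch of the comparison under that assumption, with the want list copied from the diff above:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected images for Kubernetes v1.31.2, copied from the test diff.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/kube-controller-manager:v1.31.2",
		"registry.k8s.io/kube-proxy:v1.31.2",
		"registry.k8s.io/kube-scheduler:v1.31.2",
		"registry.k8s.io/pause:3.10",
	}
	// With the VM stopped, `image list` yields no images, so got stays
	// empty and every entry in want shows up on the -want side.
	var got []string
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
	}
}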

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-018000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-018000 --alsologtostderr -v=1: exit status 83 (45.0025ms)

-- stdout --
	* The control-plane node newest-cni-018000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-018000"

-- /stdout --
** stderr ** 
	I1030 11:43:32.079868   15790 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:43:32.080079   15790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:32.080082   15790 out.go:358] Setting ErrFile to fd 2...
	I1030 11:43:32.080084   15790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:43:32.080202   15790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:43:32.080429   15790 out.go:352] Setting JSON to false
	I1030 11:43:32.080438   15790 mustload.go:65] Loading cluster: newest-cni-018000
	I1030 11:43:32.080668   15790 config.go:182] Loaded profile config "newest-cni-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:43:32.084535   15790 out.go:177] * The control-plane node newest-cni-018000 host is not running: state=Stopped
	I1030 11:43:32.088434   15790 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-018000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-018000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000: exit status 7 (35.076958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-018000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000: exit status 7 (34.411584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (79/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 9.01
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
35 TestHyperKitDriverInstallOrUpdate 10.49
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.11
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 8.87
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 2.02
55 TestFunctional/serial/CacheCmd/cache/add_local 1.03
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.24
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.11
93 TestFunctional/parallel/License 0.23
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
107 TestFunctional/parallel/ProfileCmd/profile_list 0.09
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 1.7
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 4.04
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.22
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.74
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 15.74
258 TestNoKubernetes/serial/Stop 3.25
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
274 TestStartStop/group/old-k8s-version/serial/Stop 3.44
275 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
285 TestStartStop/group/no-preload/serial/Stop 2.17
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
296 TestStartStop/group/embed-certs/serial/Stop 3.84
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.88
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
318 TestStartStop/group/newest-cni/serial/Stop 3.72
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
325 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1030 11:17:06.546655   12043 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1030 11:17:06.546992   12043 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-089000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-089000: exit status 85 (101.811042ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:16 PDT |          |
	|         | -p download-only-089000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 11:16:40
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 11:16:40.901220   12044 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:16:40.901402   12044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:16:40.901405   12044 out.go:358] Setting ErrFile to fd 2...
	I1030 11:16:40.901407   12044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:16:40.901540   12044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	W1030 11:16:40.901635   12044 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19883-11536/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19883-11536/.minikube/config/config.json: no such file or directory
	I1030 11:16:40.903121   12044 out.go:352] Setting JSON to true
	I1030 11:16:40.921115   12044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6371,"bootTime":1730305829,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:16:40.921184   12044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:16:40.927053   12044 out.go:97] [download-only-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:16:40.927173   12044 notify.go:220] Checking for updates...
	W1030 11:16:40.927244   12044 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball: no such file or directory
	I1030 11:16:40.929987   12044 out.go:169] MINIKUBE_LOCATION=19883
	I1030 11:16:40.933036   12044 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:16:40.938026   12044 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:16:40.940958   12044 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:16:40.944997   12044 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	W1030 11:16:40.950894   12044 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1030 11:16:40.951176   12044 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:16:40.953925   12044 out.go:97] Using the qemu2 driver based on user configuration
	I1030 11:16:40.953942   12044 start.go:297] selected driver: qemu2
	I1030 11:16:40.953957   12044 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:16:40.954014   12044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:16:40.956926   12044 out.go:169] Automatically selected the socket_vmnet network
	I1030 11:16:40.962508   12044 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1030 11:16:40.962633   12044 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 11:16:40.962681   12044 cni.go:84] Creating CNI manager for ""
	I1030 11:16:40.962734   12044 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1030 11:16:40.962790   12044 start.go:340] cluster config:
	{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:16:40.967538   12044 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:16:40.972003   12044 out.go:97] Downloading VM boot image ...
	I1030 11:16:40.972019   12044 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730282777-19883-arm64.iso
	I1030 11:16:53.467524   12044 out.go:97] Starting "download-only-089000" primary control-plane node in "download-only-089000" cluster
	I1030 11:16:53.467567   12044 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:16:53.526800   12044 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1030 11:16:53.526825   12044 cache.go:56] Caching tarball of preloaded images
	I1030 11:16:53.527013   12044 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:16:53.533134   12044 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1030 11:16:53.533141   12044 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1030 11:16:53.614060   12044 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1030 11:17:05.275946   12044 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1030 11:17:05.276112   12044 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1030 11:17:05.970003   12044 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1030 11:17:05.970213   12044 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/download-only-089000/config.json ...
	I1030 11:17:05.970229   12044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19883-11536/.minikube/profiles/download-only-089000/config.json: {Name:mk7fc06580051dfc989c2e90aefb7130eeed8b7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 11:17:05.970534   12044 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1030 11:17:05.970784   12044 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1030 11:17:06.499110   12044 out.go:193] 
	W1030 11:17:06.501928   12044 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19883-11536/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340 0x107c65340] Decompressors:map[bz2:0x140003eed80 gz:0x140003eed88 tar:0x140003eed30 tar.bz2:0x140003eed40 tar.gz:0x140003eed50 tar.xz:0x140003eed60 tar.zst:0x140003eed70 tbz2:0x140003eed40 tgz:0x140003eed50 txz:0x140003eed60 tzst:0x140003eed70 xz:0x140003eed90 zip:0x140003eeda0 zst:0x140003eed98] Getters:map[file:0x14000a2e680 http:0x14000048690 https:0x140000486e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1030 11:17:06.501956   12044 out_reason.go:110] 
	W1030 11:17:06.509168   12044 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 11:17:06.512066   12044 out.go:193] 
	
	
	* The control-plane node download-only-089000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-089000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
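The interesting detail in this otherwise-passing log check is the cached failure it displays: kubectl for v1.20.0 on darwin/arm64 could not be downloaded because the checksum side-file 404s. A short Go sketch that probes the same URL (taken verbatim from the log; assumption: run somewhere with outbound HTTPS):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The getter failed with "Error downloading checksum file: bad response
	// code: 404"; a HEAD request against the same checksum URL shows whether
	// the asset exists at all for this version/arch combination.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // 404 at the time of this run
}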

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-089000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.2/json-events (9.01s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-276000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-276000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (9.005660125s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (9.01s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1030 11:17:15.936133   12043 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1030 11:17:15.936189   12043 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-276000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-276000: exit status 85 (81.123041ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:16 PDT |                     |
	|         | -p download-only-089000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| delete  | -p download-only-089000        | download-only-089000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT | 30 Oct 24 11:17 PDT |
	| start   | -o=json --download-only        | download-only-276000 | jenkins | v1.34.0 | 30 Oct 24 11:17 PDT |                     |
	|         | -p download-only-276000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/30 11:17:06
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 11:17:06.962873   12068 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:17:06.963025   12068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:06.963028   12068 out.go:358] Setting ErrFile to fd 2...
	I1030 11:17:06.963030   12068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:17:06.963151   12068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:17:06.964369   12068 out.go:352] Setting JSON to true
	I1030 11:17:06.981965   12068 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6397,"bootTime":1730305829,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:17:06.982032   12068 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:17:06.986370   12068 out.go:97] [download-only-276000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:17:06.986496   12068 notify.go:220] Checking for updates...
	I1030 11:17:06.990357   12068 out.go:169] MINIKUBE_LOCATION=19883
	I1030 11:17:06.993345   12068 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:17:06.997341   12068 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:17:07.000386   12068 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:17:07.003319   12068 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	W1030 11:17:07.009370   12068 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1030 11:17:07.009606   12068 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:17:07.012271   12068 out.go:97] Using the qemu2 driver based on user configuration
	I1030 11:17:07.012280   12068 start.go:297] selected driver: qemu2
	I1030 11:17:07.012284   12068 start.go:901] validating driver "qemu2" against <nil>
	I1030 11:17:07.012330   12068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1030 11:17:07.015329   12068 out.go:169] Automatically selected the socket_vmnet network
	I1030 11:17:07.020785   12068 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1030 11:17:07.020873   12068 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 11:17:07.020892   12068 cni.go:84] Creating CNI manager for ""
	I1030 11:17:07.020915   12068 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1030 11:17:07.020921   12068 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 11:17:07.020993   12068 start.go:340] cluster config:
	{Name:download-only-276000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-276000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:17:07.025269   12068 iso.go:125] acquiring lock: {Name:mk5b69ba12ff67b46b5de4e90768c1ffbd4fa7e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 11:17:07.028299   12068 out.go:97] Starting "download-only-276000" primary control-plane node in "download-only-276000" cluster
	I1030 11:17:07.028309   12068 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:17:07.087672   12068 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1030 11:17:07.087702   12068 cache.go:56] Caching tarball of preloaded images
	I1030 11:17:07.087900   12068 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1030 11:17:07.091239   12068 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1030 11:17:07.091246   12068 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1030 11:17:07.196915   12068 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/19883-11536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-276000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-276000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-276000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-644000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-644000: exit status 85 (63.414667ms)

-- stdout --
	* Profile "addons-644000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-644000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-644000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-644000: exit status 85 (66.108209ms)

-- stdout --
	* Profile "addons-644000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-644000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestHyperKitDriverInstallOrUpdate (10.49s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1030 11:28:35.988312   12043 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1030 11:28:35.988492   12043 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1030 11:28:37.927198   12043 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1030 11:28:37.927442   12043 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1030 11:28:37.927494   12043 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit
I1030 11:28:38.427272   12043 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10914e700 0x10914e700 0x10914e700 0x10914e700 0x10914e700 0x10914e700 0x10914e700] Decompressors:map[bz2:0x1400070b5d0 gz:0x1400070b5d8 tar:0x1400070b4a0 tar.bz2:0x1400070b4c0 tar.gz:0x1400070b520 tar.xz:0x1400070b570 tar.zst:0x1400070b5a0 tbz2:0x1400070b4c0 tgz:0x1400070b520 txz:0x1400070b570 tzst:0x1400070b5a0 xz:0x1400070b5e0 zip:0x1400070b600 zst:0x1400070b5e8] Getters:map[file:0x140014d59f0 http:0x1400057d630 https:0x1400057d680] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1030 11:28:38.427428   12043 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3727113566/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.49s)
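This test passes because of the fallback visible in the log: the arch-specific asset (docker-machine-driver-hyperkit-arm64) 404s, and the code then retries the common, unsuffixed name. A hedged Go sketch of that try-arch-then-common pattern (the fetchStatus helper is hypothetical, not minikube's code; the URLs are taken from the log):

package main

import (
	"fmt"
	"net/http"
)

// fetchStatus is a hypothetical helper for this sketch: it reports the HTTP
// status code for a release-asset URL via a HEAD request.
func fetchStatus(url string) (int, error) {
	resp, err := http.Head(url)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	// Try the arch-suffixed asset first, then fall back to the common name,
	// mirroring the "trying to get the common version" step in the log.
	for _, url := range []string{base + "-arm64", base} {
		code, err := fetchStatus(url)
		if err != nil {
			fmt.Println(url, "->", err)
			continue
		}
		fmt.Println(url, "->", code)
		if code == http.StatusOK {
			break
		}
	}
}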

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status: exit status 7 (36.727125ms)

-- stdout --
	nospam-957000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status: exit status 7 (34.929916ms)

-- stdout --
	nospam-957000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status: exit status 7 (34.601542ms)

-- stdout --
	nospam-957000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.11s)

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause: exit status 83 (44.794958ms)

-- stdout --
	* The control-plane node nospam-957000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-957000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause: exit status 83 (43.0485ms)

-- stdout --
	* The control-plane node nospam-957000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-957000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause: exit status 83 (44.69525ms)

-- stdout --
	* The control-plane node nospam-957000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-957000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause: exit status 83 (43.299292ms)

-- stdout --
	* The control-plane node nospam-957000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-957000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause: exit status 83 (45.408875ms)

-- stdout --
	* The control-plane node nospam-957000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-957000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause: exit status 83 (45.771375ms)

-- stdout --
	* The control-plane node nospam-957000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-957000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 stop: (3.473929542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 stop: (3.202144666s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-957000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-957000 stop: (2.192672584s)
--- PASS: TestErrorSpam/stop (8.87s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19883-11536/.minikube/files/etc/test/nested/copy/12043/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3870980226/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cache add minikube-local-cache-test:functional-484000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 cache delete minikube-local-cache-test:functional-484000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-484000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 config get cpus: exit status 14 (35.804583ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 config get cpus: exit status 14 (40.207042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-484000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-484000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (124.728167ms)

-- stdout --
	* [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1030 11:18:49.736049   12519 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:18:49.736219   12519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:49.736223   12519 out.go:358] Setting ErrFile to fd 2...
	I1030 11:18:49.736225   12519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:49.736353   12519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:18:49.737417   12519 out.go:352] Setting JSON to false
	I1030 11:18:49.755058   12519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6500,"bootTime":1730305829,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:18:49.755138   12519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:18:49.760171   12519 out.go:177] * [functional-484000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1030 11:18:49.768153   12519 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:18:49.768199   12519 notify.go:220] Checking for updates...
	I1030 11:18:49.775167   12519 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:18:49.778064   12519 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:18:49.781179   12519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:18:49.784150   12519 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:18:49.787120   12519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:18:49.790443   12519 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:18:49.790697   12519 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:18:49.795158   12519 out.go:177] * Using the qemu2 driver based on existing profile
	I1030 11:18:49.802094   12519 start.go:297] selected driver: qemu2
	I1030 11:18:49.802100   12519 start.go:901] validating driver "qemu2" against &{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:18:49.802142   12519 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:18:49.809005   12519 out.go:201] 
	W1030 11:18:49.813169   12519 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1030 11:18:49.817079   12519 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-484000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
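The dry run exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because the requested 250MB is below the 1800MB usable minimum reported in stderr. A rough Go sketch of that kind of preflight check, with illustrative names only (minikube's actual identifiers differ):

package main

import "fmt"

// minUsableMemoryMB is the floor quoted in the log output above.
const minUsableMemoryMB = 1800

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// 250 is the value passed via --memory 250MB in the dry run above.
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}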

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-484000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-484000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.906083ms)

-- stdout --
	* [functional-484000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1030 11:18:49.614354   12515 out.go:345] Setting OutFile to fd 1 ...
	I1030 11:18:49.614495   12515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:49.614498   12515 out.go:358] Setting ErrFile to fd 2...
	I1030 11:18:49.614501   12515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1030 11:18:49.614631   12515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19883-11536/.minikube/bin
	I1030 11:18:49.616134   12515 out.go:352] Setting JSON to false
	I1030 11:18:49.634753   12515 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6500,"bootTime":1730305829,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1030 11:18:49.634834   12515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1030 11:18:49.640026   12515 out.go:177] * [functional-484000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1030 11:18:49.647175   12515 out.go:177]   - MINIKUBE_LOCATION=19883
	I1030 11:18:49.647228   12515 notify.go:220] Checking for updates...
	I1030 11:18:49.652139   12515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	I1030 11:18:49.655173   12515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1030 11:18:49.656452   12515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 11:18:49.659124   12515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	I1030 11:18:49.662161   12515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 11:18:49.665485   12515 config.go:182] Loaded profile config "functional-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1030 11:18:49.665746   12515 driver.go:394] Setting default libvirt URI to qemu:///system
	I1030 11:18:49.670096   12515 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1030 11:18:49.677147   12515 start.go:297] selected driver: qemu2
	I1030 11:18:49.677155   12515 start.go:901] validating driver "qemu2" against &{Name:functional-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1030 11:18:49.677218   12515 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 11:18:49.684120   12515 out.go:201] 
	W1030 11:18:49.688187   12515 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1030 11:18:49.692064   12515 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 addons list
I1030 11:18:14.562056   12043 retry.go:31] will retry after 2.161455941s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)
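The interleaved retry.go line above ("will retry after 2.161455941s") is minikube's polling helper backing off while a tunnel endpoint is still unreachable. A minimal Go sketch of that retry-with-backoff pattern, assuming a hypothetical health URL (illustrative only; this is not minikube's retry.go):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// retryUntil calls fn up to attempts times, doubling the delay between
// tries, and returns the last error if every attempt fails.
func retryUntil(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retryUntil(4, 2*time.Second, func() error {
		resp, err := http.Get("http://127.0.0.1:8441/healthz") // hypothetical endpoint
		if err == nil {
			resp.Body.Close()
		}
		return err
	})
	fmt.Println("final result:", err)
}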

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "52.666959ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "38.99375ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "51.052042ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "38.702292ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.667449917s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-484000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image rm kicbase/echo-server:functional-484000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-484000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 image save --daemon kicbase/echo-server:functional-484000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-484000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.01386275s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-484000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-484000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-484000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-484000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-638000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-638000 --output=json --user=testUser: (4.035483083s)
--- PASS: TestJSONOutput/stop/Command (4.04s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-012000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-012000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.9295ms)

-- stdout --
	{"specversion":"1.0","id":"6cedccf9-d21d-4860-851d-c5992f929e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-012000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82517ac9-acfe-4b00-b3f3-3498e63e699d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19883"}}
	{"specversion":"1.0","id":"8e5cbe03-75c0-4fa9-8e06-7576ccfb5ec1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig"}}
	{"specversion":"1.0","id":"0d8677e4-ff08-47c1-93a9-25d89052660d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b81ac5b1-8f0a-40f3-9802-c84ed20e453b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c905bda3-b1fd-4065-a67f-05704f239f26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube"}}
	{"specversion":"1.0","id":"cae7b5fc-9775-4c02-bba4-29122c6ed953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1e3052af-78ca-4f48-8599-774867efdc74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-012000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-012000
--- PASS: TestErrorJSONOutput (0.22s)
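Each stdout line in TestErrorJSONOutput is a CloudEvents-style JSON envelope. A small Go sketch of decoding one event follows; the struct mirrors the keys visible in the log and is not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event copied from the log above.
	line := `{"specversion":"1.0","id":"1e3052af-78ca-4f48-8599-774867efdc74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// io.k8s.sigs.minikube.error events carry the exit code as a string.
	fmt.Printf("%s: %s (exitcode=%s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}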

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.74s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-443000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (107.79225ms)

-- stdout --
	* [NoKubernetes-443000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19883
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19883-11536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19883-11536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-443000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-443000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.317416ms)

-- stdout --
	* The control-plane node NoKubernetes-443000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-443000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.626154375s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.74s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-443000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-443000: (3.245881667s)
--- PASS: TestNoKubernetes/serial/Stop (3.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-443000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-443000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.741959ms)

-- stdout --
	* The control-plane node NoKubernetes-443000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-443000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-239000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-239000 --alsologtostderr -v=3: (3.439926125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-239000 -n old-k8s-version-239000: exit status 7 (34.769083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-239000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-143000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-143000 --alsologtostderr -v=3: (2.16618725s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-143000 -n no-preload-143000: exit status 7 (58.088041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-143000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-717000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-717000 --alsologtostderr -v=3: (3.842295708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-717000 -n embed-certs-717000: exit status 7 (58.15ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-717000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-194000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-194000 --alsologtostderr -v=3: (3.879198084s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-194000 -n default-k8s-diff-port-194000: exit status 7 (58.333625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-194000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-018000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
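Note: the --images and --registries flags above remap an addon's images as NAME=value pairs; here the test points the metrics-server addon's MetricsServer image at an echoserver stub on a fake registry so the override path is exercised without pulling the real image. The invocation, exactly as in this run:

  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-018000 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain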

TestStartStop/group/newest-cni/serial/Stop (3.72s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-018000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-018000 --alsologtostderr -v=3: (3.716185667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.72s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-018000 -n newest-cni-018000: exit status 7 (63.463625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-018000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-877000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
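Note: both skips trace to the same guard. When a version-specific preload tarball is already on disk it already contains the images and binaries, so there is nothing separate to cache or verify. A sketch of how such a preload ends up cached, assuming default cache locations (the profile name here is illustrative, not from this run):

  out/minikube-darwin-arm64 start -p download-XXXX --download-only --kubernetes-version=v1.20.0
  ls ~/.minikube/cache/preloaded-tarball/   # preload .tar.lz4 files land here when present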

TestDownloadOnly/v1.31.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.32s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3928859600/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730312295465784000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3928859600/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730312295465784000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3928859600/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730312295465784000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3928859600/001/test-1730312295465784000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.193083ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:15.527505   12043 retry.go:31] will retry after 294.464155ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.702416ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:15.917034   12043 retry.go:31] will retry after 496.852303ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.802917ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:16.508083   12043 retry.go:31] will retry after 1.626834907s: exit status 83
I1030 11:18:16.725837   12043 retry.go:31] will retry after 4.913663196s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.439542ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:18.230896   12043 retry.go:31] will retry after 1.380344686s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.777167ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:19.704487   12043 retry.go:31] will retry after 3.397436394s: exit status 83
I1030 11:18:21.641785   12043 retry.go:31] will retry after 7.553945649s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.820375ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:23.196143   12043 retry.go:31] will retry after 4.327478307s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.112167ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo umount -f /mount-9p": exit status 83 (49.821209ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3928859600/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.32s)
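Note: the three MountCmd skips in this group share one failure mode. "minikube mount" runs a 9p file server on the host, and the test polls for the mount inside the guest with increasing backoff before giving up; on this runner the mount never appears because macOS prompts before allowing a non-code-signed binary to listen on a non-localhost port. Reduced to its two commands (host directory abbreviated, profile from this run):

  out/minikube-darwin-arm64 mount -p functional-484000 <hostdir>:/mount-9p --alsologtostderr -v=1 &
  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"   # retried until the mount appears or the test skips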

TestFunctional/parallel/MountCmd/specific-port (11.26s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2558492470/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (65.002416ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:27.854997   12043 retry.go:31] will retry after 553.632609ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.586875ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:28.505530   12043 retry.go:31] will retry after 577.426336ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.473208ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:29.177761   12043 retry.go:31] will retry after 842.627316ms: exit status 83
I1030 11:18:29.197859   12043 retry.go:31] will retry after 12.486104803s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.660291ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:30.115449   12043 retry.go:31] will retry after 1.708619206s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.764542ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:31.919183   12043 retry.go:31] will retry after 1.753628064s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.468292ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:33.765641   12043 retry.go:31] will retry after 5.015412573s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.36975ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "sudo umount -f /mount-9p": exit status 83 (48.829417ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-484000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2558492470/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.26s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.36s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2850227272/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2850227272/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2850227272/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1: exit status 83 (83.300709ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:39.135254   12043 retry.go:31] will retry after 748.38628ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1: exit status 83 (92.388916ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:39.978408   12043 retry.go:31] will retry after 1.071145745s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1: exit status 83 (92.91275ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:41.144874   12043 retry.go:31] will retry after 1.338536843s: exit status 83
I1030 11:18:41.686122   12043 retry.go:31] will retry after 14.191668678s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1: exit status 83 (90.758041ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:42.576480   12043 retry.go:31] will retry after 1.835804405s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1: exit status 83 (87.878959ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:44.502573   12043 retry.go:31] will retry after 1.350569982s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1: exit status 83 (90.71725ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
I1030 11:18:45.946273   12043 retry.go:31] will retry after 2.969988444s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-484000 ssh "findmnt -T" /mount1: exit status 83 (90.0695ms)

-- stdout --
	* The control-plane node functional-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-484000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2850227272/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2850227272/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-484000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2850227272/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.36s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
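Note: gvisor coverage is opt-in; the integration binary registers a --gvisor flag, which was false in this run, hence the skip. A hedged sketch of opting in when invoking the suite directly (the exact harness wiring may differ):

  go test ./test/integration -run TestGvisorAddon --gvisor=true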

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.49s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-286000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-286000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-286000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /etc/hosts:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /etc/resolv.conf:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-286000

>>> host: crictl pods:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: crictl containers:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> k8s: describe netcat deployment:
error: context "cilium-286000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-286000" does not exist

>>> k8s: netcat logs:
error: context "cilium-286000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-286000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-286000" does not exist

>>> k8s: coredns logs:
error: context "cilium-286000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-286000" does not exist

>>> k8s: api server logs:
error: context "cilium-286000" does not exist

>>> host: /etc/cni:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: ip a s:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: ip r s:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: iptables-save:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: iptables table nat:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-286000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-286000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-286000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-286000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-286000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-286000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-286000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-286000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-286000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-286000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-286000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: kubelet daemon config:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> k8s: kubelet logs:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-286000

>>> host: docker daemon status:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: docker daemon config:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: docker system info:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: cri-docker daemon status:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: cri-docker daemon config:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: cri-dockerd version:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: containerd daemon status:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: containerd daemon config:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: containerd config dump:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: crio daemon status:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: crio daemon config:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: /etc/crio:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

>>> host: crio config:
* Profile "cilium-286000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-286000"

----------------------- debugLogs end: cilium-286000 [took: 2.373918042s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-286000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-286000
--- SKIP: TestNetworkPlugins/group/cilium (2.49s)
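Note: the debugLogs dump above is consistent with a profile that was never started. The dumped kubeconfig is empty (clusters, contexts and users all null), so every kubectl probe fails with a missing-context error, and every host probe fails because no "cilium-286000" profile exists. The two error families can be reproduced directly; expected failures shown as comments:

  kubectl --context cilium-286000 get pods                  # context "cilium-286000" was not found / does not exist
  out/minikube-darwin-arm64 -p cilium-286000 ssh "ip a s"   # * Profile "cilium-286000" not found.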

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-203000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-203000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
